Latest Discussions
Delete Statement slowness
Hello, every weekend we run a DELETE statement that removes more than a million rows based on 16 WHERE conditions. Since a DELETE is itself a resource-consuming operation, it takes a long time to remove that many rows. Could this many WHERE conditions be what is causing the slowness? Our DB environment is:
- SQL Server 2022 Enterprise Edition with the latest patch
- TempDB files added based on the number of vCPUs on the VM; TempDB is on a separate disk
- NDF files of the secondary filegroups on a separate disk for better throughput
- MaxDOP set to the default
- Data and log files on separate disks
- A weekly DB maintenance plan is in place
Regards, Nitin
nitinshete1975 · Dec 12, 2024 · Copper Contributor · 29 Views · 0 likes · 2 Comments
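The usual first suspect for a slow multi-million-row delete is a single huge transaction rather than the number of predicates. A minimal batched-delete sketch, using a hypothetical table dbo.BigTable and a stand-in predicate in place of the real 16 conditions:

-- Batched delete sketch: dbo.BigTable and the WHERE clause below are hypothetical
-- stand-ins; the real statement would keep all 16 conditions.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(YEAR, -2, GETDATE())  -- stand-in for the 16 predicates
      AND Status = 'Archived';
    SET @rows = @@ROWCOUNT;   -- loop ends when no more rows match
END;

Keeping each batch small lets the transaction log truncate between batches and limits lock escalation; indexing the columns used in the predicates usually matters more than how many predicates there are.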
Import from Excel changes Header Names
I have a wide table in Excel with column names like Q1.1, Q1.2, etc. However, when I import this table using SSMS, the column names are renamed to Q1#1#, Q1#2#, and so on. This renaming is visible when you look at Column Mapping before actually importing. Is there a way to prevent this renaming? If not, can I write some code to rename the columns afterward? I have 96 columns in this dataset, which I need to upload monthly. This issue is becoming a nuisance because I want SQL to clean up this dataset before pivoting it and appending it to my final table. Currently, I manually clean up the data in Excel, pivot it, and then upload a tall file with only 4 columns. This method avoids the renaming issue because all the question numbers become values in a column rather than column names.
Martin Lucas · Dec 12, 2024 · Copper Contributor · 26 Views · 0 likes · 2 Comments
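If the Import Wizard's renaming cannot be switched off, the columns can be renamed back after the load. A sketch under assumptions (the target table is called dbo.SurveyRaw, and the only mangling is '.' becoming '#' plus a trailing '#'):

-- Hypothetical post-import cleanup: renames columns such as Q1#1# back to Q1.1.
-- Assumes the imported table is named dbo.SurveyRaw.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'EXEC sp_rename ''dbo.SurveyRaw.' + c.name
             + N''', ''' + REPLACE(LEFT(c.name, LEN(c.name) - 1), '#', '.')
             + N''', ''COLUMN'';' + NCHAR(13)
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.SurveyRaw')
  AND c.name LIKE N'Q%#';      -- only the mangled question columns
EXEC sys.sp_executesql @sql;

Alternatively, importing into a staging table and unpivoting in T-SQL (the same reshaping you currently do by hand in Excel) sidesteps the column-name issue entirely.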
What is the best practice to create documentation for our SQL Server
I know that we have to document all our risk assessments and our backup and recovery strategy, but for a Database Administrator handing tasks over to another Database Administrator, is there a best practice for what to document? I am also aware of third-party tools that help with documentation, but based on your experience, what are the most important things to document? Thank you
elly_wati · Dec 11, 2024 · Copper Contributor · 23 Views · 0 likes · 1 Comment
What are the best practices for managing and optimizing a SQL table that contains 10 million records
I have a table that currently holds 10 million records, and it continues to grow daily. As the dataset expands, I'm encountering significant challenges in retrieving data efficiently and generating reports. The increasing volume of data is causing performance bottlenecks, leading to slow query execution times and delays in report generation. To address these issues, I need to implement strategies for optimizing data retrieval and managing the growing dataset effectively and to ensure that the system remains responsive and capable of handling the increasing data load.
Vallikannu · Dec 09, 2024 · Copper Contributor · 58 Views · 0 likes · 2 Comments
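Ten million rows is usually manageable for SQL Server when the reporting queries have supporting indexes. As a sketch only, with hypothetical table and column names (dbo.Sales, OrderDate, CustomerID, Amount), two typical options are a covering rowstore index for selective filters and a nonclustered columnstore index for large analytic scans:

-- Hypothetical names throughout; match these to the real reporting predicates.
-- Covering index for queries that filter on a date range and group by customer:
CREATE NONCLUSTERED INDEX IX_Sales_OrderDate_Customer
    ON dbo.Sales (OrderDate, CustomerID)
    INCLUDE (Amount);

-- Columnstore index for wide aggregations over most of the table:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales_Reporting
    ON dbo.Sales (OrderDate, CustomerID, Amount);

Partitioning by date and archiving old rows are the usual next steps once the table grows well beyond this size.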
Help in Query to calculate the % from last two timelines
I want to compare the FAmt from the two most recent lines, and if the difference is more than 80% the row should appear in the result. The group/index keys are A, B, C, D, Edate.

Create table #AR (A char(10), B char(10), C char(10), D char(10), Edate datetime, FAmt decimal(6,2), G char(2))
Insert into #AR values ('AA','CCC','1','E','2022-01-01',12.60,'2')
Insert into #AR values ('AA','CCC','1','E','2023-01-01',15.80,'2')
Insert into #AR values ('AA','CCC','1','E','2024-01-01',18.60,'2')
Insert into #AR values ('BB','DCC','1','E','2022-01-01',11.40,'2')
Insert into #AR values ('BB','DCC','1','E','2024-01-01',15.60,'2')
Insert into #AR values ('CC','DCC','1','E','2021-01-01',12.60,'2')
Insert into #AR values ('CC','GCC','2','E','2022-01-01',15.60,'2')
Insert into #AR values ('CC','GCC','2','E','2023-04-01',18.60,'2')
Insert into #AR values ('CC','GCC','2','E','2024-04-01',34.80,'2') -- Note: this FAmt is more than 80% above the last timeline and should be in the expected result

Expected result:
A  | B   | C | D | Edate      | FAmt  | G | Comments
CC | GCC | 2 | E | 2024-04-01 | 34.80 | 2 | Current Amount is > 80% from last timeline

Solved · Kenny_Gua · Dec 06, 2024 · Copper Contributor · 54 Views · 0 likes · 6 Comments
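One way to express this, sketched here with LAG rather than quoted from the thread's accepted answer, is to compare each row with the previous row in the same A, B, C, D group and keep rows where the increase exceeds 80%:

-- Sketch: flag rows whose FAmt is more than 80% above the previous row in the group.
;WITH Ordered AS
(
    SELECT A, B, C, D, Edate, FAmt, G,
           LAG(FAmt) OVER (PARTITION BY A, B, C, D ORDER BY Edate) AS PrevFAmt
    FROM #AR
)
SELECT A, B, C, D, Edate, FAmt, G,
       'Current Amount is > 80% from last timeline' AS Comments
FROM Ordered
WHERE PrevFAmt IS NOT NULL
  AND (FAmt - PrevFAmt) / PrevFAmt > 0.80;

For the sample data this returns only the 2024-04-01 row for CC/GCC; if strictly only the two most recent rows per group should be compared, a ROW_NUMBER filter can be added on top of the same CTE.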
CDC - change data capture
Hi, I am trying to run ALTER TABLE on a table that is enabled for CDC and I get the error below. I have tried disabling CDC and re-enabling it, and still nothing works. Can anyone assist?

Cannot alter the 'Table' because it is being used for Change Data Capture.
Error: 8277, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was an error during formatting. Tracing, ETW, notifications etc are skipped.

Regards
kevinfr820 · Dec 06, 2024 · Copper Contributor · 5 Views · 0 likes · 0 Comments
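For reference, the usual sequence is to disable the capture instance on the affected table, run the DDL, and re-enable capture. The sketch below uses hypothetical database, table, and capture-instance names; since disabling and re-enabling has already been tried here, it is also worth confirming exactly which object the error message names.

-- Hypothetical names throughout (MyDatabase, dbo.MyTable, dbo_MyTable).
USE MyDatabase;

EXEC sys.sp_cdc_disable_table
     @source_schema    = N'dbo',
     @source_name      = N'MyTable',
     @capture_instance = N'dbo_MyTable';

ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;   -- the DDL that was being blocked

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MyTable',
     @role_name     = NULL;   -- re-creates the capture instance afterwards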
sql query to check status change of an item
I have a table dbo.tblPresentationStatus (SQL script at the bottom). I have to select rows where the status change is wrong. The flow should be: Review --> Approve --> Presentation --> Close.
From 'Review' the status for a specific presentation ID can change to 'Approve', 'Presentation' or 'Close'.
From 'Approve' the status for a specific presentation ID can change to 'Presentation' or 'Close'.
From 'Presentation' the status for a specific presentation ID can change only to 'Close'.
I want to write a query that returns presentations with a wrong status flow. So, for the records given in the SQL script at the bottom, the expected output is:

PrID | Status1 | Status2      | Status3 | Status4
103  | REVIEW  | PRESENTATION | APPROVE | CLOSE
101  | APPROVE | REVIEW       | NULL    | NULL

----------------------------------- sql script -------------------------------------------

DROP TABLE IF EXISTS #tblPresentation

CREATE TABLE #tblPresentation
(
    PrID int NOT NULL,
    PrName nvarchar(100) NULL
)

INSERT INTO #tblPresentation VALUES
(100,'PrA'),
(101,'PrB'),
(102,'PrC'),
(103,'PrD')

DROP TABLE IF EXISTS #tblPresentationStatus

CREATE TABLE #tblPresentationStatus
(
    StatusID int NOT NULL,
    PrID int NOT NULL,
    PrStatus nvarchar(100) NOT NULL,
    StatusDate datetime NOT NULL
)

INSERT INTO #tblPresentationStatus VALUES
--PRESENTATION ID 100
(1,100,'REVIEW','2024-01-01 00:00:00.00'),
(2,100,'APPROVE','2024-01-02 00:00:00.00'),
(3,100,'PRESENTATION','2024-01-03 07:00:00.00'),
(4,100,'CLOSE','2024-01-03 10:00:00.00'),
--PRESENTATION ID 101
(5,101,'APPROVE','2024-01-01 00:00:00.00'),
(6,101,'REVIEW','2024-01-03 10:00:00.00'), --wrong status change: going backward from 'APPROVE' to 'REVIEW' is not allowed
--PRESENTATION ID 102
(7,102,'REVIEW','2024-01-01 00:00:00.00'),
(8,102,'PRESENTATION','2024-01-02 00:00:00.00'),
(9,102,'CLOSE','2024-01-03 10:00:00.00'),
--PRESENTATION ID 103
(10,103,'REVIEW','2024-01-01 00:00:00.00'),
(11,103,'PRESENTATION','2024-01-02 00:00:00.00'),
(12,103,'APPROVE','2024-01-03 00:00:00.00'), --wrong status change: going backward from 'PRESENTATION' to 'APPROVE' is not allowed
(13,103,'CLOSE','2024-01-04 00:00:00.00')

SqlPuzzle · Dec 03, 2024 · Copper Contributor · 21 Views · 0 likes · 1 Comment
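One way to find the offending presentations, sketched here rather than taken from a reply, is to map each status to an ordinal and flag any presentation where a later row has a lower ordinal than the one before it:

-- Sketch: a status that moves backward in rank marks the presentation as invalid.
;WITH Ranked AS
(
    SELECT StatusID, PrID, PrStatus, StatusDate,
           CASE PrStatus
                WHEN 'REVIEW'       THEN 1
                WHEN 'APPROVE'      THEN 2
                WHEN 'PRESENTATION' THEN 3
                WHEN 'CLOSE'        THEN 4
           END AS StatusRank
    FROM #tblPresentationStatus
),
Flagged AS
(
    SELECT PrID, StatusRank,
           LAG(StatusRank) OVER (PARTITION BY PrID ORDER BY StatusDate, StatusID) AS PrevRank
    FROM Ranked
)
SELECT DISTINCT PrID
FROM Flagged
WHERE StatusRank < PrevRank;   -- returns 101 and 103 for the sample data

Pivoting the statuses of those PrIDs into Status1..Status4 columns (for example with ROW_NUMBER plus MAX(CASE ...)) would then produce the exact expected layout.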
Nested Query Slowing Performance
Hi, at the beginning of my SQL code I have a CTE target table called TARGETS, which contains 3 columns: TARGET_NAME, MODE and VALUE. Further down in my query I am trying to use a nested query to retrieve values from this table; however, it is really slowing down the performance. I am wondering if there is a better way to do this? An example of the nested query looks like this:

SUM(CASE WHEN LEG = 'Empty' AND LOCATION = 'Perth'
         THEN (SELECT VALUE FROM TARGETS WHERE TARGET_NAME = 'Empty Perth' AND MODE = 'Yes')
         WHEN LEG = 'Empty' AND LOCATION = 'Melbourne'
         THEN (SELECT VALUE FROM TARGETS WHERE TARGET_NAME = 'Empty Melbourne' AND MODE = 'No')
         etc...
    END) / COUNT(DISTINCT(LEG)) AS Target
FROM LEG_DETAILS

Many Thanks,
leerjones85 · Dec 03, 2024 · Copper Contributor · 45 Views · 0 likes · 5 Comments
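One pattern that often helps is to join the TARGETS CTE (or materialize it into a temp table) once, instead of running a correlated scalar subquery inside every CASE branch. The sketch below assumes TARGET_NAME can be derived from LEG and LOCATION; the per-branch MODE filter would need to be folded into the join or into the TARGETS rows themselves:

-- Sketch only: the join key below is an assumption about how TARGET_NAME is built.
SELECT SUM(t.VALUE) / COUNT(DISTINCT d.LEG) AS Target
FROM LEG_DETAILS AS d
JOIN TARGETS AS t
    ON t.TARGET_NAME = CONCAT(d.LEG, ' ', d.LOCATION)   -- e.g. 'Empty Perth'
WHERE d.LEG = 'Empty';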
Sticky Notes Account Login and Sync Issue Report
I have a habit of using sticky notes as a daily memo tool, and everything had been functioning normally until recently. On the morning of Friday, November 29th, I suddenly discovered that my sticky notes were unable to sync. After testing by logging out and logging back in, I encountered the message: "Your account does not have an email address, so your sticky notes cannot be saved to the cloud." Since then, I have been unable to log in. That day, I contacted Microsoft Support. An engineer conducted remote testing and found that it was not a system issue, as other accounts could log in successfully. However, the account I normally use, email address removed for privacy reasons, could not log in to Sticky Notes. They advised me to contact customer service and get assistance from the account department. The next day, on Saturday, I contacted the account department. They asked me to log in to Microsoft using my regular account, which I was able to do without issue. However, I still could not log in to Sticky Notes, leaving the account department uncertain about the cause of the problem. They then transferred me to the Office department, but they were also unable to resolve the issue.
Billy0821 · Dec 03, 2024 · Copper Contributor · 8 Views · 0 likes · 0 Comments
Issue with importing json file
I am using the following script to read data from a JSON file and import it into SQL Server. The data import works without any issue. However, I noticed that when the JSON file contains special characters (e.g. è) it replaces them with è. I already saved the JSON file with UTF-8. Is there any way to import the actual character?

drop table if exists #Test3

CREATE TABLE #Test3
(
    EMP_COMMON_FULL_NAME NVARCHAR(500),
    EMP_COMMON_PRIMARY_JOB NVARCHAR(500),
    PEOPLE_EMAIL NVARCHAR(500),
    EMP_COMMON_PRIMARY_ORG NVARCHAR(500),
    PEOPLE_BADGE_NUMBER NVARCHAR(500),
    PEOPLE_EXPECTED_WEEKLY_HOURS NVARCHAR(500),
    PEOPLE_EXPECTED_DAILY_HOURS NVARCHAR(500)
);

DECLARE @json NVARCHAR(MAX);

SELECT @json = BulkColumn
FROM OPENROWSET(BULK 'F:\PowerBI\json\file2.json', SINGLE_CLOB, CODEPAGE = '65001') AS jsonFile;

INSERT INTO #Test3
(
    EMP_COMMON_FULL_NAME, EMP_COMMON_PRIMARY_JOB, PEOPLE_EMAIL, EMP_COMMON_PRIMARY_ORG,
    PEOPLE_BADGE_NUMBER, PEOPLE_EXPECTED_WEEKLY_HOURS, PEOPLE_EXPECTED_DAILY_HOURS
)
SELECT
    JSONData.EMP_COMMON_FULL_NAME,
    JSONData.EMP_COMMON_PRIMARY_JOB,
    JSONData.PEOPLE_EMAIL,
    JSONData.EMP_COMMON_PRIMARY_ORG,
    JSONData.PEOPLE_BADGE_NUMBER,
    JSONData.PEOPLE_EXPECTED_WEEKLY_HOURS,
    JSONData.PEOPLE_EXPECTED_DAILY_HOURS
FROM OPENJSON(@json, '$.data.children')
WITH
(
    EMP_COMMON_FULL_NAME NVARCHAR(500) '$.attributes[0].value',
    EMP_COMMON_PRIMARY_JOB NVARCHAR(500) '$.attributes[1].value',
    PEOPLE_EMAIL NVARCHAR(500) '$.attributes[2].value',
    EMP_COMMON_PRIMARY_ORG NVARCHAR(500) '$.attributes[3].value',
    PEOPLE_BADGE_NUMBER NVARCHAR(500) '$.attributes[4].value',
    PEOPLE_EXPECTED_WEEKLY_HOURS NVARCHAR(500) '$.attributes[5].value',
    PEOPLE_EXPECTED_DAILY_HOURS NVARCHAR(500) '$.attributes[6].value'
) AS JSONData;

SELECT * FROM #Test3;

Sample JSON file:

{
  "metadata": { "numNodes": "500", "metadataKey": "fewfewf-sdvrv-evfewsv", "cacheKey": "fewfewf-sdvrv-evfewsv", "currencyCode": "NONE", "totalNodes": "500", "totalElements": "500" },
  "data": {
    "key": { "ROOT": "-1" },
    "coreEntityKey": {},
    "attributes": [],
    "children": [
      {
        "key": { "PEOPLE": "67648" },
        "coreEntityKey": { "EMP": { "id": "11111" } },
        "attributes": [
          { "key": "EMP_COMMON_FULL_NAME", "alias": "Name", "rawValue": "Sam, Rogers", "value": "Sam, Rogers" },
          { "key": "EMP_COMMON_PRIMARY_JOB", "alias": "Primary Job", "rawValue": "Accountant", "value": "Accountant" },
          { "key": "PEOPLE_EMAIL", "alias": "Email Address", "rawValue": "email address removed for privacy reasons", "value": "email address removed for privacy reasons" },
          { "key": "EMP_COMMON_PRIMARY_ORG", "alias": "Location", "rawValue": "1ère Inc/1ère Inc", "value": "1ère Inc/1ère Inc" },
          { "key": "PEOPLE_BADGE_NUMBER", "alias": "Active Badge Number", "rawValue": "1234", "value": "1234" },
          { "key": "PEOPLE_EXPECTED_WEEKLY_HOURS", "alias": "Weekly Hours", "rawValue": "35.75", "value": "35.75" },
          { "key": "PEOPLE_EXPECTED_DAILY_HOURS", "alias": "Daily Hours", "rawValue": "7.8", "value": "7.80" }
        ],
        "children": [],
        "summaryListDisplay": [],
        "rootEntity": "PEOPLE",
        "customProperties": {}
      },
      {
        "key": { "PEOPLE": "22222" },
        "coreEntityKey": { "EMP": { "id": "22222" } },
        "attributes": [
          { "key": "EMP_COMMON_FULL_NAME", "alias": "Name", "rawValue": "F3irst1, Last1", "value": "F3irst1, Last1" },
          { "key": "EMP_COMMON_PRIMARY_JOB", "alias": "Primary Job", "rawValue": "Cl3erk", "value": "Cl3erk" },
          { "key": "PEOPLE_EMAIL", "alias": "Email Address", "rawValue": "email address removed for privacy reasons", "value": "email address removed for privacy reasons" },
          { "key": "EMP_COMMON_PRIMARY_ORG", "alias": "Location", "rawValue": "1ère 3Inc/1ère Inc", "value": "1ère 3Inc/1ère Inc" },
          { "key": "PEOPLE_BADGE_NUMBER", "alias": "Active Badge Number", "rawValue": "23222", "value": "23222" },
          { "key": "PEOPLE_EXPECTED_WEEKLY_HOURS", "alias": "Weekly Hours", "rawValue": "30.0", "value": "30.00" },
          { "key": "PEOPLE_EXPECTED_DAILY_HOURS", "alias": "Daily Hours", "rawValue": "30.0", "value": "30.00" }
        ],
        "children": [],
        "summaryListDisplay": [],
        "rootEntity": "PEOPLE",
        "customProperties": {}
      }
    ],
    "summaryListDisplay": [],
    "rootEntity": "ROOT",
    "customProperties": {}
  }
}

dp3450 · Nov 30, 2024 · Copper Contributor · 33 Views · 0 likes · 1 Comment
Tags
- Data Warehouse (64 Topics)
- Integration Services (58 Topics)
- sql server (43 Topics)
- Reporting Services (41 Topics)
- Business Intelligence (35 Topics)
- SQL (35 Topics)
- Analysis Services (29 Topics)
- Business Apps (22 Topics)
- Analytics (18 Topics)
- Big Data (11 Topics)