Aug 16 2020 02:38 PM
A table in a database is nearly 2 TB in size. Even though the server runs with 8 CPUs, 80 GB of RAM and very fast flash disks, performance is bad. Is this just what to expect with a table this big?
For some reason, the server has never been set up with any maintenance scripts: no index defragmentation, no statistics updates. Trying to run these now takes ages and makes the server perform even worse.
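To be concrete about what I mean by maintenance, this is roughly the kind of per-index command that has never been scheduled here (dbo.MyBigTable and PK_MyBigTable are placeholder names, not the real ones):

-- Light, online defragmentation of one index (placeholder names).
ALTER INDEX PK_MyBigTable ON dbo.MyBigTable REORGANIZE;

-- Full rebuild of the same index; on a 2 TB table this is a very heavy operation.
ALTER INDEX PK_MyBigTable ON dbo.MyBigTable REBUILD;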
This big table has a few indexes set up, one of them for the primary key. When I run a script like this https://basitaalishan.com/2012/07/06/find-the-size-of-index-in-sql-server/ to report the size of the indexes, it tells me the PK index is 1.8 TB. I really do not understand how that can be, because the disk on the server is less than 4 TB and also holds other databases; the index plus the table itself would take up more space than the disk provides. So how should I understand index size?
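As far as I can tell, the linked script boils down to something like this (simplified; dbo.MyBigTable is a placeholder for the real table name, and my version may differ slightly from the original):

-- Approximate size of each index on the table, in GB (8 KB pages).
SELECT  i.name                                      AS index_name,
        i.type_desc                                 AS index_type,
        SUM(ps.used_page_count) * 8.0 / 1024 / 1024 AS used_space_gb
FROM    sys.indexes AS i
JOIN    sys.dm_db_partition_stats AS ps
    ON  ps.object_id = i.object_id
    AND ps.index_id  = i.index_id
WHERE   i.object_id = OBJECT_ID('dbo.MyBigTable')
GROUP BY i.name, i.type_desc
ORDER BY used_space_gb DESC;

It is the 1.8 TB figure this reports for the PK index that I cannot reconcile with the disk size.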
Listing fragmentation on the index shows that the PK index is only about 1% fragmented, probably because data is never deleted, so I assume index optimization will not help. But what about updating statistics: what does it actually do, and could it improve performance in my case?
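For completeness, this is roughly how I am checking fragmentation, and the kind of statistics update I am considering (again, dbo.MyBigTable is a placeholder, and the SAMPLE percentage is just a guess on my part):

-- Fragmentation per index on the table; LIMITED mode keeps the scan cheap.
SELECT  ips.index_id,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.MyBigTable'), NULL, NULL, 'LIMITED') AS ips;

-- Statistics refresh; FULLSCAN would read the whole 2 TB table, so a sampled
-- update is probably the only realistic option here.
UPDATE STATISTICS dbo.MyBigTable WITH SAMPLE 5 PERCENT;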
Regards, Lars.