Hello Team,
As we discussed in previous posts, both Azure SQL Database and Azure SQL Managed Instance support table partitioning. In Azure SQL Database it is not possible to specify the filegroup, whereas in Azure SQL Managed Instance it is. In this post, we are going to show an example of how to split a partition, based on a customer case where the data had reached the last partition of the partitioned table.
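To illustrate that difference, here is a minimal sketch of both approaches. The function PF_BY_YEAR, the schemes PS_BY_YEAR_DB and PS_BY_YEAR_MI, and the filegroups FG_2023 and FG_2024 are illustrative names only and are not part of the customer scenario:

--------------------------------------------------------
-- Azure SQL Database: every partition must map to [PRIMARY]
--------------------------------------------------------
CREATE PARTITION FUNCTION PF_BY_YEAR (INT) AS RANGE LEFT FOR VALUES (2023, 2024);
CREATE PARTITION SCHEME PS_BY_YEAR_DB AS PARTITION PF_BY_YEAR ALL TO ([PRIMARY]);

--------------------------------------------------------
-- Azure SQL Managed Instance: partitions can map to user-defined filegroups
-- (assumes FG_2023 and FG_2024 were created beforehand with ALTER DATABASE ... ADD FILEGROUP / ADD FILE)
--------------------------------------------------------
CREATE PARTITION SCHEME PS_BY_YEAR_MI AS PARTITION PF_BY_YEAR TO (FG_2023, FG_2024, [PRIMARY]);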
We are going to review the articles posted here and here, where we can find the T-SQL command syntax and the best practices that we need to review before splitting a partitioned table.
--------------------------------------------------------
-- First, we are going to create the partition function
--------------------------------------------------------
CREATE PARTITION FUNCTION PF_HASH_BY_VALUE (BIGINT)
AS RANGE LEFT FOR VALUES (100000, 200000, 300000, 400000, 500000, 600000, 700000, 800000, 900000);

--------------------------------------------------------
-- Second, we create the partition scheme
--------------------------------------------------------
CREATE PARTITION SCHEME PS_HASH_BY_VALUE
AS PARTITION PF_HASH_BY_VALUE ALL TO ([PRIMARY]);

--------------------------------------------------------
-- Third, we are going to create the table and insert some data
--------------------------------------------------------
CREATE TABLE [TBL_PARTITION]
(
  [MY_VALUE] [bigint] NOT NULL,
  CONSTRAINT [PK_TBL_PARTITION] PRIMARY KEY CLUSTERED ([MY_VALUE] ASC)
) ON PS_HASH_BY_VALUE ([MY_VALUE]);

INSERT INTO [TBL_PARTITION] (MY_VALUE)
VALUES (100001), (200001), (300001), (400001), (500001),
       (600001), (700001), (800001), (900001),
       (100002), (200002), (300002), (400002), (500002),
       (600002), (700002), (800002), (900002),
       (1000002), (1100002), (1200002), (1300002), (1400002),
       (1500002), (1600002), (1700002), (1800002), (1900002), (2000002);
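Before checking the data, it can be useful to confirm which boundary values the partition function actually has. This is just an optional sketch using the system catalog views (the column aliases are illustrative):

--------------------------------------------------------
-- Optional check: list the boundary values of the partition function
--------------------------------------------------------
SELECT pf.name AS partition_function,
       prv.boundary_id,
       prv.value AS boundary_value
FROM sys.partition_functions AS pf
JOIN sys.partition_range_values AS prv
  ON prv.function_id = pf.function_id
WHERE pf.name = 'PF_HASH_BY_VALUE'
ORDER BY prv.boundary_id;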
Now, after running the previous scripts, executing the next query shows the following situation:
SELECT MY_VALUE,
       $PARTITION.PF_HASH_BY_VALUE(MY_VALUE) AS HASH_IDX
FROM (SELECT MY_VALUE FROM [TBL_PARTITION]) AS TEST (MY_VALUE);
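To make the distribution easier to see, we can also aggregate the rows per partition. This is a small sketch; the column aliases are illustrative:

--------------------------------------------------------
-- Row count per partition using the $PARTITION function
--------------------------------------------------------
SELECT $PARTITION.PF_HASH_BY_VALUE(MY_VALUE) AS PARTITION_NUMBER,
       COUNT(*) AS ROWS_IN_PARTITION
FROM [TBL_PARTITION]
GROUP BY $PARTITION.PF_HASH_BY_VALUE(MY_VALUE)
ORDER BY PARTITION_NUMBER;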
In fact, we can see that the distribution of the data is not correct: every value greater than 900000 falls into the last partition. We can confirm it by executing the following T-SQL:
SELECT OBJECT_NAME(object_id) AS table_name, *
FROM sys.dm_db_partition_stats
WHERE OBJECT_NAME(object_id) = 'TBL_PARTITION';
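Another possible way to look at this is to show the per-partition row count next to its upper boundary value. This is only a sketch; it assumes the clustered index (index_id 1 or 0 for a heap) and that, for a RANGE LEFT function, boundary_id matches partition_number:

--------------------------------------------------------
-- Row count per partition together with its upper boundary value
--------------------------------------------------------
SELECT p.partition_number,
       prv.value AS upper_boundary,
       p.rows
FROM sys.partitions AS p
LEFT JOIN sys.partition_range_values AS prv
  ON prv.function_id = (SELECT pf.function_id
                        FROM sys.partition_functions AS pf
                        WHERE pf.name = 'PF_HASH_BY_VALUE')
 AND prv.boundary_id = p.partition_number
WHERE p.object_id = OBJECT_ID('TBL_PARTITION')
  AND p.index_id IN (0, 1)
ORDER BY p.partition_number;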
In this situation, basically, we need to split the last partition at a new boundary value and tell the partition scheme which filegroup to use for the next partition.
-- Split the last partition at the new boundary value
ALTER PARTITION FUNCTION PF_HASH_BY_VALUE () SPLIT RANGE (1100000);

-- Designate the filegroup that will hold the partition created by the next split
ALTER PARTITION SCHEME PS_HASH_BY_VALUE NEXT USED [PRIMARY];
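If more boundaries were needed, the same pattern could be repeated, setting NEXT USED before each SPLIT so the partition scheme always has a filegroup marked for the new partition. The boundary values 1300000 and 1500000 below are illustrative only and are not executed in this demo:

--------------------------------------------------------
-- Illustrative only: repeating the pattern for additional boundary values
--------------------------------------------------------
ALTER PARTITION SCHEME PS_HASH_BY_VALUE NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION PF_HASH_BY_VALUE () SPLIT RANGE (1300000);

ALTER PARTITION SCHEME PS_HASH_BY_VALUE NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION PF_HASH_BY_VALUE () SPLIT RANGE (1500000);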
Now, it is time to review what happened after executing the split process:
SELECT MY_VALUE,
       $PARTITION.PF_HASH_BY_VALUE(MY_VALUE) AS HASH_IDX
FROM (SELECT MY_VALUE FROM [TBL_PARTITION]) AS TEST (MY_VALUE);
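If you are running this as a standalone demo, a possible cleanup is shown below. It simply drops the sample objects created above, so run it only in a test database:

--------------------------------------------------------
-- Cleanup of the demo objects (test databases only)
--------------------------------------------------------
DROP TABLE [TBL_PARTITION];
DROP PARTITION SCHEME PS_HASH_BY_VALUE;
DROP PARTITION FUNCTION PF_HASH_BY_VALUE;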
Enjoy!