Clustered Column Store Index: Bulk Loading the Data
First published on MSDN on Jul 27, 2014


As described in an earlier blog post, the clustered columnstore index is optimized for the typical data warehouse (DW) scenario: nightly or trickle data loads combined with fast query performance. Multiple inserts can load data concurrently while DW queries run under the read uncommitted transaction isolation level.

This blog describes the locking behavior when data is inserted through the Bulk Load command. Here is the table we will use in the example:

Create table t_bulkload (
    accountkey              int not null,
    accountdescription      nvarchar(50),
    accounttype             nvarchar(50),
    AccountCodeAlternatekey int)

Bulk loading into CCI

A more common scenario is to bulk import data into a CCI. The bulk import loads the data into the delta store if the batch size is less than 102,400 rows; otherwise the rows are loaded directly into a compressed row group. Let us walk through an example to illustrate Bulk Load.
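For orientation, the batch size decision looks like this (a sketch only; the data file name and batch sizes here are hypothetical, not part of the walkthrough below):

-- fewer than 102,400 rows per batch: rows land in a delta row group
bulk insert t_bulkload
FROM 'c:\temp\some_data.dat'
WITH (BATCHSIZE = 50000)

-- 102,400 rows or more per batch: rows are compressed directly
bulk insert t_bulkload
FROM 'c:\temp\some_data.dat'
WITH (BATCHSIZE = 110000)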

-- Let us prepare the data

-- insert 110K rows into a regular table

begin tran
declare @i int = 0
while (@i < 110000)
begin
    insert into t_bulkload values (@i, 'description', 'dummy-accounttype', @i*2)
    set @i = @i + 1
end
commit tran
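To sanity-check the preparation (a quick verification query, not in the original post):

select count(*) from t_bulkload   -- expect 110000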



-- bcp out the data... run the following command in command window

bcp AdventureWorksDW2012..t_bulkload out c:\temp\t_bulkload.dat -c -T

As the next step, let us truncate the table t_bulkload and create a clustered columnstore index on it. At this point there are no row groups, as the table has no rows.

--truncate the table

Truncate table t_bulkload

-- create a clustered columnstore index on the table

CREATE CLUSTERED COLUMNSTORE index t_bulkload_cci on t_bulkload
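To confirm that the empty CCI has no row groups yet, you can run the same DMV query used later in this post (a quick check, not in the original):

select count(*) as row_group_count
from sys.column_store_row_groups
where object_id = object_id('t_bulkload')   -- returns 0 at this point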

Now we will bulk import the data with a batch size > 102,400 as follows. Notice that I am running this command inside a transaction; this will help us see which locks are taken.

-- now bulkload the data

begin tran

bulk insert t_bulkload
FROM 'c:\temp\t_bulkload.dat'
WITH (BATCHSIZE = 103000)


-- show rowgroups

select * from sys.column_store_row_groups where object_id = object_id('t_bulkload')

The output below shows that two row groups were created. The first row group, with row_group_id = 0, is COMPRESSED with 103,000 rows. Because the batch size was >= 102,400, SQL Server compressed this row group directly. This is useful because Bulk Load is a common way to load data into a data warehouse: by compressing the rows directly, SQL Server can minimize logging, since the rows do not pass through a delta row group, and there is no need for the tuple mover to move the data later. The second batch had only 7,000 rows because we ran out of rows in the data file (remember, the data file had only 110,000 rows), so this set of rows was inserted into delta row group 1. Note that this row group is still marked OPEN, meaning it has not been closed. It will eventually be closed when it reaches the limit of 1,048,576 rows (about 1 million) and then compressed by the background tuple mover.
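For reference, a narrower query against the same DMV shows just the columns discussed here:

select row_group_id, state_description, total_rows, deleted_rows
from sys.column_store_row_groups
where object_id = object_id('t_bulkload')
order by row_group_id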

Let us now look at the locks taken by the transaction. Note that we have an X lock on both the delta row group and the compressed row group; taking locks at the row group level minimizes the locking overhead.
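One way to inspect these locks while the transaction is still open (a sketch using the sys.dm_tran_locks DMV; the exact resource naming for row groups can vary by SQL Server version):

select resource_type, resource_description, request_mode, request_status
from sys.dm_tran_locks
where request_session_id = @@SPID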

You may wonder what will happen if we insert a row from another session. Let us do just that:

begin tran

insert into t_bulkload values (-1, 'single row', 'single row', -1)

Now let us look at the row groups again. You will note that the new row was inserted into a new delta row group, because the Bulk Insert transaction still holds an X lock on row group 1. SQL Server allows the INSERT to succeed instead of blocking it because INSERT is a common operation in a DW, and maximum concurrency is needed. The downside is that you now have two open delta row groups. Future inserts can go into either of them, so in the worst case you may accumulate 2 million rows in delta row groups before they get compressed. This can impact DW query performance, because the part of a query that accesses rows in a delta row group is not as efficient.
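If open delta row groups linger after a load and you want the rows in columnar format right away, one option (a suggestion, not from the original post) is to rebuild the index, which compresses all delta store rows:

-- rebuilding the CCI compresses all rows, including those in open delta row groups
ALTER INDEX t_bulkload_cci ON t_bulkload REBUILD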

I hope this blog clarifies how data is bulk imported into a clustered columnstore index. In most cases, there are no issues if you are loading a large amount of data. In the worst case, I expect the number of delta row groups to be the same as the degree of concurrency of the Bulk Import operations.


Sunil Agarwal
