Power Pivot data model file size


I have a Power Pivot data model that extracts a large table (around 150 million rows) using a SQL query. The table holds around 100 months of data, and every month it is re-extracted to include the latest month. The file size has been around 3 GB, but after this latest month's extract from SQL it jumped to 4 GB. How can this be? Is there any documentation on how Power Pivot compresses data? I loaded the latest month into Power Pivot in isolation and that file is only 0.06 GB. Does Power Pivot's compression become much less efficient past a certain size? Any information would be greatly appreciated.

2 Replies


Good question, to which I have no direct answer without playing with the actual file. In general, Power Pivot is backed by the VertiPaq engine, which uses the same logic for any tabular model (Power Pivot, Power BI, etc.). It has a formula engine and a storage engine; the latter transforms raw data into the compressed data cache the data model works with. The exact algorithms are Microsoft proprietary information and are not shared, but there is a lot of material about them. The VertiPaq optimization steps are briefly described in Vertipaq optimization and its impact on columnstore compression - Simple Talk (red-gate.com); at each step a different algorithm may be chosen.
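To make the general idea concrete, here is a toy Python sketch of dictionary encoding followed by run-length encoding, two techniques VertiPaq is known to use. This is only an illustration: the real engine's algorithms, segment layout, and encoding choices are proprietary and far more sophisticated.

```python
# Toy sketch of columnar compression (illustration only, not VertiPaq's
# actual implementation).

def dictionary_encode(column):
    """Replace each value with a small integer index into a dictionary."""
    dictionary = {}
    indexes = []
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary)
        indexes.append(dictionary[value])
    return dictionary, indexes

def run_length_encode(indexes):
    """Collapse runs of repeated indexes into (index, run_length) pairs."""
    runs = []
    for idx in indexes:
        if runs and runs[-1][0] == idx:
            runs[-1] = (idx, runs[-1][1] + 1)
        else:
            runs.append((idx, 1))
    return runs

# A sorted, low-cardinality column compresses extremely well:
column = ["2024-01"] * 5 + ["2024-02"] * 3
dictionary, indexes = dictionary_encode(column)
print(dictionary)                   # {'2024-01': 0, '2024-02': 1}
print(run_length_encode(indexes))   # [(0, 5), (1, 3)]
```

This is why a column like the month key costs almost nothing even across 150 million rows: it collapses to a tiny dictionary plus a short run list. File size is driven mostly by the columns that do not compress this way.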

From a practical point of view, I'd install DAX Studio (it's free) and check View Metrics -> VertiPaq Analyzer to see which tables and columns have a significant impact in your particular case. In any case, file size is not a linear function of the number of rows.

One useful post is Optimizing High Cardinality Columns in VertiPaq - SQLBI, but there are many more; that's just the first one I found.
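As a hedged illustration of why growth can be worse than linear, here is a toy size model: for a dictionary-encoded high-cardinality column (say, a transaction ID where every value is unique), the bits needed per row grow with the logarithm of the distinct count, so adding rows can also widen the index for every existing row. This is a deliberate simplification; real VertiPaq segmentation and re-encoding decisions are more involved.

```python
import math

def estimated_index_bits(distinct_values):
    """Bits needed per row to index into the dictionary (toy model)."""
    return max(1, math.ceil(math.log2(distinct_values)))

# Toy model: a column where every value is unique, so the distinct count
# equals the row count. Bits per row grow with cardinality, so total size
# grows faster than linearly in rows.
for rows in (50_000_000, 100_000_000, 150_000_000):
    bits = estimated_index_bits(rows)
    print(f"{rows:>12,} rows -> {bits} bits/row -> "
          f"~{rows * bits / 8 / 1024**3:.2f} GiB for the indexes alone")
```

In this toy model, going from 100M to 150M unique values pushes the index width from 27 to 28 bits, so every row gets slightly more expensive, not just the new ones. Something analogous (a column's dictionary or encoding crossing a threshold) is one plausible reason a monthly refresh can add far more file size than the new month's rows alone would suggest; VertiPaq Analyzer in DAX Studio will show which columns actually grew.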

Thank you so much!