Over the course of the last few months we implemented several improvements in SAP BW. We already blogged about two improvements separately:
In this blog we want to describe a few additional improvements.
Some BW queries did not use intra-query parallelism, because the BW statement generator did not generate the MAXDOP N hint. This was particularly an issue when using SQL Server 2014 columnstore. As of SQL Server 2016, queries on columnstore are still pretty fast even when using MAXDOP 1. See https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/SQL-Server-2016-improvements-... for details. The following SAP notes fix the issue with the missing MAXDOP hint:
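The effect of the hint can be sketched with a simplified BW-style aggregation query. This is illustrative only; the table name is hypothetical and the actual statements generated by BW are more complex:

```sql
-- Without an explicit MAXDOP hint, the server-wide "max degree of
-- parallelism" configuration applies. The hint below explicitly allows
-- up to four threads for this query.
SELECT "COMPANY_CODE", SUM("AMOUNT")
FROM "/BIC/FSALESCUBE"      -- hypothetical f-fact table
GROUP BY "COMPANY_CODE"
OPTION (MAXDOP 4);          -- hint now emitted by the BW statement generator
```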
The 06-tables are used as a performance improvement for BW queries with large IN-filters, for example the filter condition "WHERE COMPANY_CODE IN (1, 2, 3, ... n)". When the IN-clause contains 50 elements or more, a temporary 06-table (for example /BI0/0600000001) is created and filled with the elements of the IN-clause. Then an UPDATE STATISTICS is executed on the 06-table. Finally, the 06-table is joined with the (fact-) table rather than applying the IN-filter on the (fact-) table. After executing the BW-query, the 06-table is truncated and added to a pool of 06-tables. Therefore, the 06-tables can be reused for future BW queries.
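The rewrite described above can be sketched as follows (table and column names are hypothetical):

```sql
-- Original form: a large IN-filter directly on the fact table.
SELECT SUM(f."AMOUNT")
FROM "/BIC/FSALESCUBE" f
WHERE f."COMPANY_CODE" IN (1, 2, 3 /* ... 50 or more elements */);

-- Rewritten form: BW fills a pooled 06-table with the filter values,
-- updates its statistics, and joins it with the fact table instead.
INSERT INTO "/BI0/0600000001" ("VALUE") VALUES (1), (2), (3) /* ... */;
UPDATE STATISTICS "/BI0/0600000001";

SELECT SUM(f."AMOUNT")
FROM "/BIC/FSALESCUBE" f
JOIN "/BI0/0600000001" t ON t."VALUE" = f."COMPANY_CODE";

-- After the BW query, the table is emptied and returned to the pool.
TRUNCATE TABLE "/BI0/0600000001";
```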
Historically, the 06-tables have neither a primary key nor any other database index. Therefore, no DB statistics exist and the UPDATE STATISTICS command has no effect. However, SQL Server automatically creates a column statistics object once the 06-table contains at least 500 rows. Smaller 06-tables have no DB statistics at all, which can result in bad execution plans and long-running BW queries.
This issue went unanalyzed for a long time, because it is self-healing: once a 06-table has ever contained at least 500 rows, it has a column statistics object. From then on, UPDATE STATISTICS works fine when the 06-table is reused, even if it now contains fewer than 500 rows.
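You can check whether a given 06-table already has a statistics object by querying the catalog views (a sketch; the table name is hypothetical):

```sql
-- Lists all statistics objects on a 06-table. An empty result means
-- UPDATE STATISTICS has nothing to update, and the optimizer has to
-- guess the row count and value distribution.
SELECT s.name, s.auto_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('[/BI0/0600000001]');
```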
As a workaround, you can simply disable the usage of the 06-tables by setting the RSADMIN parameter MSS_EQSID_THRESHOLD = 999999999. To do so, you have to apply the following SAP note first:
Of course, this workaround has some drawbacks: the SQL queries become more complicated when the 06-tables are turned off. Therefore, we made a code change that alters the structure of the 06-tables: they now have a primary key and, as a result, regular DB statistics. To use this, you have to implement the following SAP note and delete the existing 06-tables using report SAP_DROP_TMPTABLES:
This note is not generally available yet, because we would like to have some pilot customers first. You can apply as a pilot customer by opening an SAP message in component BW-SYS-DB-MSS.
You will get the best BW cube compression performance in SAP release 7.40 and newer.
For all BW releases we introduced a HASH JOIN and a HASH GROUP optimizer hint for cubes with columnstore indexes. We had already delivered a HASH JOIN hint in the past, but it was counterproductive in some cases for rowstore cubes, so we removed it again. With the following SAP Note applied, the HASH JOIN hint is used for columnstore tables only. Furthermore, you can configure the hints using RSADMIN parameters:
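The shape of a BW query with these hints looks roughly as follows (illustrative T-SQL; the fact and dimension table names are hypothetical):

```sql
-- Hash joins and hash aggregation generally work well with columnstore,
-- because they can run in batch mode; for rowstore cubes these hints
-- were counterproductive in some cases and are therefore not generated.
SELECT d."CALMONTH", SUM(f."AMOUNT")
FROM "/BIC/FSALESCUBE" f                 -- hypothetical columnstore fact table
JOIN "/BIC/DSALESCUBET" d ON d."DIMID" = f."KEY_T"
GROUP BY d."CALMONTH"
OPTION (HASH JOIN, HASH GROUP);          -- hints generated for columnstore only
```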
The last step of BW cube compression is the deletion of the compressed requests. For this, the partitions of the compressed requests are dropped. It turned out that dropping a partition from a table with more than 10,000 partitions takes a very long time. On the one hand, having more than a few hundred partitions is not recommended. On the other hand, we need a fast way to compress a BW cube with 10,000+ partitions, because BW cube compression is the only way to reduce the number of partitions. We released the following SAP notes, which speed up partition drops when there are several thousand partitions:
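To see whether any fact tables are approaching a critical partition count, you can run a query like the following sketch against the catalog views (the LIKE pattern for f-fact table names is an assumption):

```sql
-- Counts partitions per f-fact table. More than a few hundred partitions
-- indicates that BW cube compression is overdue.
SELECT OBJECT_NAME(p.object_id)            AS table_name,
       COUNT(DISTINCT p.partition_number)  AS partition_count
FROM sys.partitions AS p
WHERE OBJECT_NAME(p.object_id) LIKE '/BI_/F%'   -- hypothetical name filter
GROUP BY p.object_id
ORDER BY partition_count DESC;
```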
For large BW systems with a huge number of partitions and columnstore rowgroups, Single Table Analysis in SAP transaction DBACOCKPIT takes a very long time. Therefore, you should update the stored procedure sap_get_index_data, which is attached to the following SAP Note:
In the past we used the columnstore only for the e-fact table of SAP BW realtime cubes. Data is loaded into the e-fact table only during BW cube compression, which ensures that DB statistics are up to date and the columnstore rowgroups are fully compressed. The data load into the f-fact table in planning mode does not ensure this. Due to customer demand, you can now use the columnstore for selected or all realtime cubes. For this, you have to apply the following note, which also enables realtime cubes to use the Flat Cube model:
We increased r3load import performance when using table splitting with columnstore by optimizing the report SMIGR_CREATE_DDL. With the new code, r3load never loads into a SQL Server heap any more. This minimizes the downtime for database migrations and system copies. Furthermore, we decreased the runtime of report RS_BW_POST_MIGRATION when using the columnstore. For this, you have to apply the following SAP notes:
Customers automatically benefit from all system copy improvements on SQL Server, when applying the new patch collection of the following SAP Note:
In addition, you should also check the following SAP Note before performing a system copy or database migration:
The SAP BW code for SQL Server is permanently being improved. All optimizations for SQL Server columnstore are documented in the following note: