Thursday, March 22, 2007

SAP BW and DB2 for z/OS V8

Best Practices - Top Ten SAP BW recommendations
Here is a base set of recommendations for SAP BW implementations on DB2 z/OS. Customers who follow these recommendations have experienced greater maintenance and query stability, resulting in better overall performance of their zSeries SAP BW installations.

1. Set proper DB2 ZPARMS.
For more information, refer to 7.1, “General z/OS settings and ZPARMS” on page 121 of the IBM Redbook Best Practices for SAP Business Information Warehouse on DB2 for z/OS V8, SG24-6489-00.
2. Ensure that the user selects the option to run RUNSTATS after data loads.
For more information, refer to 7.3, “Runstats” on page 131 of the IBM Redbook Best Practices for SAP Business Information Warehouse on DB2 for z/OS V8, SG24-6489-00.
A cost-based optimizer relies on current and correct statistics to ensure optimal SQL access path selection. In an SAP BW environment, statistics do not need to be re-collected unless the object's data content has changed as a result of a load, SAP compression, rollup, or data archiving. Therefore, only these activities require DB2 statistics to be updated.
With a load, new packets are loaded into the F fact table. Without knowledge of the newly loaded data, the optimizer may not choose an optimal access path for subsequent reporting queries, aggregate rollups, or SAP compressions that specify search criteria on the individual packet number. For this reason, it is important to ensure that RUNSTATS is triggered as part of the load; a sample RUNSTATS statement follows the list below.
There are two ways to run RUNSTATS:
a. Manually: RSA1 -> 'Manage' infocube -> tab strip 'performance' -> 'Create Statistics (Btch)'
b. Process chains: If you are already using process chains to load data, then add a RUNSTATS node to the process chain. For a further optimized RUNSTATS capability with process chains, see SAP Note 778437.
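For reference, here is a minimal sketch of the kind of RUNSTATS utility statement that ends up being executed. The database and table space name (SAPBWP.FSALES) is a hypothetical placeholder; in practice SAP generates the statement for the infocube's fact table, so this illustrates the utility syntax rather than the exact statement SAP issues.

   -- Minimal sketch; SAPBWP.FSALES is a hypothetical database.tablespace
   -- name for an infocube F fact table. Collects table, column, and index
   -- statistics while allowing concurrent access, and updates both access
   -- path and space statistics in the catalog.
   RUNSTATS TABLESPACE SAPBWP.FSALES
            TABLE(ALL)
            INDEX(ALL)
            SHRLEVEL CHANGE
            UPDATE ALL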
3. Keep current with SAP support packs.
Each new SAP support pack delivers fixes and performance enhancements based upon prior customer requirements or upon areas for improvement identified jointly by DB2 and SAP development.
4. Utilize SAP compression (E fact table).
For more information, refer to “Overview” on page 35 of the IBM Redbook Best Practices for SAP Business Information Warehouse on DB2 for z/OS V8, SG24-6489-00.
When F fact table packets are consolidated into the E fact table via a process known as "SAP compression", queries on the E fact table filter by meaningful dimension/master data characteristics rather than by the arbitrary packet number. This can result in better exploitation of the available indexes on the E fact table, since the F fact table must carry an index led by the packet number to improve compression and rollup performance, but not query performance. Given the volatility of the F fact table, RUNSTATS is required more frequently on it than on the E fact table. Therefore, keeping the F fact table a moderate size by compressing packets into E can improve overall RUNSTATS performance and reduce the RUNSTATS cost for the entire infocube.
SAP compression can reduce the impact of index drop and recreate during infocube data loads. SAP compression is a prerequisite for enabling partitioning on the E fact table.
Finally, the most important reason for SAP compression is non-cumulative key figures, for which an additional row is inserted or updated in the E fact table with the same combination of dimension IDs but an infinite value for time (the reference point). Rather than requiring you to sum all delta values from the F fact table, the E fact table reference point can be accessed directly, thus improving query performance.
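Conceptually, and only as an illustration (these are not the statements SAP actually generates), SAP compression behaves like an INSERT ... SELECT with a GROUP BY that folds requests from the F fact table into the E fact table, summing the key figures and collapsing the packet dimension to zero. All object and column names below are hypothetical placeholders loosely modeled on SAP's naming pattern, and the sketch ignores the merge with rows that already exist in the E fact table.

   -- Conceptual sketch only; "/BIC/FSALES", "/BIC/ESALES", KEY_SALESP
   -- (packet dimension), KEY_SALEST (time dimension), KEY_SALES1, and the
   -- key figures QUANTITY/REVENUE are hypothetical names. :LAST_REQUEST is
   -- a placeholder for the highest request being compressed. Real
   -- compression is driven by SAP and also merges into existing E rows.
   INSERT INTO "/BIC/ESALES"
          (KEY_SALESP, KEY_SALEST, KEY_SALES1, QUANTITY, REVENUE)
   SELECT 0, KEY_SALEST, KEY_SALES1, SUM(QUANTITY), SUM(REVENUE)
     FROM "/BIC/FSALES"
    WHERE KEY_SALESP <= :LAST_REQUEST
    GROUP BY KEY_SALEST, KEY_SALES1;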
5. Partition the E fact table via SAP.
For more information, refer to 5.4.4, “Partitioned E-fact table” on page 53 of the IBM Redbook Best Practices for SAP Business Information Warehouse on DB2 for z/OS V8, SG24-6489-00.
The E fact table (the result of SAP compression) can be partitioned via SAP, which provides a greater opportunity for improved query performance and reduced elapsed time, as well as a potential reduction in operational costs. Partitioning the E fact table increases the opportunity for DB2 parallelism, which can significantly reduce query elapsed time. Similarly, queries with time-based filtering can eliminate unnecessary partitions from being accessed, regardless of the filtering dimensions or indexes chosen for fact table access. These two benefits also result in greater access path stability for queries against the infocube.
An additional benefit of partitioning is that partition-level operations such as the COPY utility, REORG, or RUNSTATS can be invoked by the DBA, as illustrated below. With time-based partitions, historical partitions do not require further REORG/RUNSTATS because their data will not change over time. An increase in E fact table size should therefore not result in a significant increase in operational maintenance costs. For these reasons, we recommend that you partition the E fact table and use SAP compression.
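To make the operational benefit concrete, the sketch below shows partition-level utility statements of the kind a DBA could run against only the most recently loaded partition. The object name (SAPBWP.ESALES) and the partition number are hypothetical placeholders, and the COPY step assumes a SYSCOPY output data set is allocated in the utility job.

   -- Hypothetical database.tablespace name and partition number: maintain
   -- only partition 12 of the E fact table instead of the whole object.
   RUNSTATS TABLESPACE SAPBWP.ESALES PART 12 TABLE(ALL)
   REORG    TABLESPACE SAPBWP.ESALES PART 12 SHRLEVEL REFERENCE
   COPY     TABLESPACE SAPBWP.ESALES DSNUM 12 FULL YES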
6. Use DB2 compression.
DB2 hardware compression is one of the strengths that differentiates zSeries from other platforms. Compression rates of 70% and greater are possible for large SAP BW objects, such as fact tables, ODS objects, and large aggregates. The four main benefits of compression are:
a. Reduced disk space
b. Improved elapsed time for sequential processing (for example, queries, copy utility)
c. Improved bufferpool hit ratio (more rows per page)
d. Reduced log data volume for insert-intensive processing
In V8 the compression dictionary moves above the 2 GB bar, which may provide virtual storage relief for DBM1; this gives customers even more motivation to exploit the strong compression capabilities of zSeries.
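As a hedged sketch of how compression is enabled at the table space level (in an SAP system this attribute is normally maintained through the SAP data dictionary and DBA tools rather than with native DDL, and the names below are hypothetical placeholders):

   -- SQL: enable hardware-assisted compression for a hypothetical
   -- fact table's table space.
   ALTER TABLESPACE SAPBWP.FSALES COMPRESS YES;

   -- Utility: a subsequent REORG builds the compression dictionary and
   -- compresses the existing rows.
   REORG TABLESPACE SAPBWP.FSALES SHRLEVEL REFERENCE

   -- SQL: PAGESAVE in the catalog then reports the percentage of pages
   -- saved by compression for each partition.
   SELECT PARTITION, PAGESAVE
     FROM SYSIBM.SYSTABLEPART
    WHERE DBNAME = 'SAPBWP' AND TSNAME = 'FSALES';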
7. Keep current with the SAP certified DB2 maintenance level.
As with SAP software, each maintenance release of DB2 provides additional fixes and enhancements based upon prior customer problems and requirements. To minimize exposure to known problems, it is recommended that you stay current with the SAP certified maintenance level and apply additional PTFs as identified in SAP notes 390016 and 81737. For more information, refer to 6.5, “PTF checker” on page 113 of the IBM Redbook Best Practices for SAP Business Information Warehouse on DB2 for z/OS V8, SG24-6489-00.
8. Utilize aggregates.
DB2 z/OS has a heritage founded on operational applications, where high-performance data updates can be as much a priority as data retrieval performance. Also, DB2 is unhindered by the update challenges of warehouse-specific index designs. Both points mean that DB2 can provide very strong aggregate build performance, which is beneficial given the SAP recommendation that aggregates are the main tool for improving query performance in an SAP BW environment.
It is common for SAP customers to build aggregates during the off-peak window; this quietest period should be exploited fully to build as many aggregates as necessary to provide the required query response times for users. Customers with a CPU chargeback system may also benefit from pushing this work to off-peak hours and thus incurring off-peak CPU rates. And for customers supporting multiple time zones, the sophisticated IRD/WLM capabilities can be utilized to ensure that the off-hours aggregate builds for one time zone do not impact the online workload of another time zone.
9. Use the SAP transaction/programs to remove unused data.
Master data tables can be preloaded with SAP default data, or loaded by users with data that is never actually used. This unused data can disrupt the cost-based optimizer's estimates of how many rows will be returned, or it can cause the data to be retrieved unnecessarily for every query involving that table. Therefore, we recommend that you purge unwanted data by using the following methods:
a. Transaction RSRV for dimension data.
b. Program RSDMD_DEL_BACKGROUND or function module RSDMD_DEL_MASTER_DATA to delete unused master data.
c. Program RSCDS_DEL_OLD_REQUESTS to remove obsolete rows from aggregate F tables (SAP Note 609164).
10. Use the multi-provider approach carefully.
If the multi-provider is made up of infocubes with the same data model, but the cubes cannot use structure-specific infoobject properties (for example, the cubes contain disjoint sets of data, where the partitioning characteristic has a range of values in each cube), then BW must query every cube in order to retrieve the result. However, a "homogeneous multi-provider" is useful if the underlying cubes have a constant value defined for a partitioning characteristic and queries restrict on that characteristic; in that case, only the cubes that contain relevant data are accessed (SAP Note 629541). Therefore, we recommend that you use the multi-provider approach wisely, to ensure that the optimizer can adequately determine which cubes to access. The result is improved query performance.

Special Notices
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.
