We are using IDAA Loader V2.1 to run backups of very large AOTs. When the IDAA Loader backs up an IDAA AOT, it uses SQL multirow fetch to transfer the data from IDAA into a z/OS sequential file. This may work fine for small AOTs, but in our environment the AOTs are very large, and the SQL fetch mechanism is a poor choice for backups because it leads to very high CPU usage and very long elapsed times. For example, compared to a full DSNUTILB image copy of a table containing the same data (51'239'384'238 rows) in Db2 for z/OS and on IDAA, we measured a 70 times longer elapsed time, 35 times higher CPU usage, a 2.8 times higher I/O count, and a 46 times higher SU count (including zIIP SUs for the IDAA Loader backup utility).
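For reference, we assume the backup transfer follows the usual Db2 for z/OS rowset-cursor pattern sketched below (table and column names are hypothetical, and the actual rowset size used by the IDAA Loader is unknown to us). Even with a large rowset, every row still passes through the full SQL fetch path, which is what drives the CPU cost at this scale:

```sql
-- Hypothetical sketch of a multirow (rowset) fetch, as used for the backup transfer.
-- AOT_TABLE, COL1, COL2 and the host-variable arrays :HVA-COL1, :HVA-COL2 are assumptions.
DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
  SELECT COL1, COL2 FROM AOT_TABLE;

OPEN C1;

-- Repeated until SQLCODE +100: each call moves up to 100 rows through the SQL layer.
FETCH NEXT ROWSET FROM C1 FOR 100 ROWS
  INTO :HVA-COL1, :HVA-COL2;

CLOSE C1;
```

By contrast, DSNUTILB COPY bypasses this per-row SQL processing entirely by reading the underlying pages directly, which is why the resource consumption differs by orders of magnitude.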
From a product like the IDAA Loader we expect much better backup performance. There should be a mechanism other than SQL fetch to transfer the data from IDAA to z/OS. For example, DSNUTILB reads the data through direct access to the VSAM cluster instead of fetching it via SQL.
|Who would benefit from this IDEA?||All customers using the AOT backup function.|
How should it work?
Get the data with an internal IDAA mechanism instead of through a Db2 SQL fetch.
|Priority Justification||To succeed with our project, we need much higher backup performance very soon.|
|Customer Name||Zuercher Kantonalbank|