Users and DBAs frequently use cron to schedule jobs to run off-hours. In a federated data warehouse with many teams, tables, and schemas, users and DBAs are expected to monitor their own tables, loads, RUNSTATS, and other maintenance and monitoring jobs. Because each team operates independently, each needs the ability to schedule and run its own jobs according to its business needs. In addition, many long-time customers have existing scripts and jobs that should transfer easily to any appliance.
One attempted workaround was to introduce a separate server to run the cron jobs. This proved very difficult because a remote server requires a password to connect to the database, and for security reasons passwords cannot be stored unencrypted. The only viable option then appears to be changing AUTHENTICATION to CLIENT, but that is itself not a recommended security practice.
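To illustrate the problem, a remote cron entry of this kind has to embed the database password in clear text. The database name, user, password, and table below are hypothetical, not from an actual deployment:

```shell
# Hypothetical crontab entry on a separate cron server.
# The connection password must appear in clear text (inline here, or in a
# world-readable file) -- exactly the security exposure described above.
0 2 * * * db2 connect to BLUDB user dbadmin using 'PlainTextPassword' && db2 "runstats on table sales.orders with distribution and detailed indexes all"
```

Running the same entry directly on the appliance under the owning user would need no stored password at all, which is why re-enabling local cron avoids the issue.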
Another possible workaround is to use Db2's built-in scheduling. Again, it is not practical to rewrite the many cron jobs that already exist; that is an overwhelming and expensive amount of rework for something that has worked for years as-is, without security or maintenance overhead.
Finally, for activities such as RUNSTATS and other maintenance routines, AUTO MAINT was investigated. But this option does not prevent scheduling conflicts when "EXTERNAL TABLE" or "CREATE TABLE AS SELECT" statements attempt to run and lock the system catalogs. In addition, AUTO MAINT throttles the maintenance, making it hard to estimate how long the commands will run under different workloads across many different tables.
|Who would benefit from this IDEA?||All customers and users that need to schedule jobs, so effectively all users of the appliance are impacted.|
How should it work?
Re-enable cron jobs. It is unclear how or why this was disabled; typically, all Linux and UNIX users are allowed to run crontab -e, which creates files in the /var/spool/cron directory, and users can (and frequently do) create crontab entries to schedule jobs. In a containerized environment, the /var/spool/cron directory should either be saved and restored across an upgrade, or /var/spool/cron should be linked to a persistent location that all users can access. If the directory is copied before an upgrade, it should be automatically restored post-upgrade. If a link to a persistent directory is used, the data should be untouched during an upgrade.
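As a sketch of the save-and-restore option, upgrade tooling could archive the crontab directory before the upgrade and unpack it afterwards. The hook names and the CRON_DIR and BACKUP_TAR locations below are assumptions for illustration, not the appliance's actual upgrade interface:

```shell
#!/bin/sh
# Sketch: preserve user crontabs across a container upgrade.
# CRON_DIR and BACKUP_TAR defaults are assumptions; real upgrade
# tooling would define its own persistent location.
CRON_DIR="${CRON_DIR:-/var/spool/cron}"
BACKUP_TAR="${BACKUP_TAR:-/persistent/cron-backup.tar}"

save_crontabs() {
    # Archive every user's crontab file before the upgrade replaces the container.
    tar -cf "$BACKUP_TAR" -C "$(dirname "$CRON_DIR")" "$(basename "$CRON_DIR")"
}

restore_crontabs() {
    # Unpack the archived crontabs after the upgrade completes.
    tar -xf "$BACKUP_TAR" -C "$(dirname "$CRON_DIR")"
}
```

The symlink alternative avoids the copy entirely: pointing /var/spool/cron at a directory on a persistent volume (for example, `ln -s /persistent/cron /var/spool/cron` at container start) leaves the crontab files untouched by the upgrade.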
|Priority Justification||This impacts many users, and it is expensive to rewrite what is already working. This needs to be corrected for the next generation of the Db2 appliance.|