IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com


Status Not under consideration
Workspace Db2
Components Monitoring
Created by Guest
Created on Aug 29, 2019

Getting the Access Plans at the Routine Level for Db2 LUW

Db2 performance is one of the major areas to look at. Whenever there is a performance issue in the database, one of the main things we examine is how the queries are behaving in the database.

Db2 LUW provides a lot of options to get access plans at the query level, such as db2expln, db2exfmt, dsmtop, db2pd, etc.
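
For illustration, a minimal sketch of getting a query-level access plan with the explain facility and db2exfmt (the database name and query are placeholders, and the explain tables must already exist, e.g. created with SYSPROC.SYSINSTALLOBJECTS):

connect to mydb;
set current explain mode explain;
-- with explain mode EXPLAIN, the next statement is compiled and explained but not executed
SELECT COUNT(*) FROM syscat.tables;
set current explain mode no;
connect reset;
-- format the most recently captured plan from the explain tables
!db2exfmt -d mydb -1 -o query_plan.out;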

For example, if a customer reports slowness in the system, we drill down to the queries involved: the CPU and I/O they use, the scans they perform, and so on.

A bunch of SQL statements will be embedded in the routine, and they get precompiled into the routine's package.

My point here is that, up to now, there is no utility in Db2 LUW that retrieves the access plans for all the queries inside a routine.

Example: if a routine, say Fare_calculation(), has around 300 SQL statements inside it, then getting the cost of each statement is a time-consuming process.

So it would be a good idea to have a utility, say db2routineplan, which gets the access plans at the routine level, both for all the routines in the database and for an individual routine.

 

Benefits of the idea

===============

1) It will save time for the DBA or developer who is looking at the cost of the queries at the routine level.

2) It will help in diagnosing the performance of a routine and the causes of its slowness.

3) It will help with performance analysis of the individual SQL statements.

 

Current workaround

================

 

1) Get the list of all packages that belong to routines in the database.

2) Store the package names and the corresponding stored procedure names in a file.

3) In a loop, run db2expln against those packages, which in turn produces the access plans at the routine level (a rough sketch follows below).
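
A rough sketch of this workaround as a shell script (the database name, file name, and db2expln options are illustrative; rows in SYSCAT.ROUTINEDEP with BTYPE = 'K' identify the packages that belong to compiled routines, as in the script posted in the comments):

db2 connect to mydb
# list the schema and name of every package that belongs to a non-system routine
db2 -x "SELECT RTRIM(D.BSCHEMA) || ' ' || RTRIM(D.BNAME)
        FROM SYSCAT.ROUTINEDEP D
        WHERE D.BTYPE = 'K' AND D.ROUTINESCHEMA NOT LIKE 'SYS%'" > routine_packages.txt
# explain every section of each package (section 0 means all sections)
while read pkgschema pkgname; do
  db2expln -d mydb -c "$pkgschema" -p "$pkgname" -s 0 -g -o "expln_${pkgschema}.${pkgname}.out"
done < routine_packages.txt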

Needed by Date May 31, 2020
  • Guest
    Sep 5, 2019

    Great idea!!!

  • Guest
    Sep 4, 2019

    Very useful quick performance fix possible

  • Guest
    Sep 4, 2019

    It's quite easy to explain all routines, or a subset. I still do not understand why you would want to do that, but here is a simple Db2 script that will explain all routines matching a pattern:

    connect to mydb;

    -- Explain every package section belonging to routines whose schema and name match
    -- the given LIKE patterns. Each inner statement ends with "; --" so that the CLP
    -- does not treat the ";" as the end of the CREATE PROCEDURE statement.
    create or replace procedure explainRoutine( schema varchar(128), name varchar(128))
    language sql
    begin atomic
      DECLARE explain_schema VARCHAR(128) default NULL; --
      DECLARE requestor VARCHAR(128); --
      DECLARE explain_time timestamp; --
      DECLARE source_name VARCHAR(128); --
      DECLARE source_schema VARCHAR(128); --
      DECLARE source_version VARCHAR(64); --
      DECLARE sectno int; --
      -- ignore SQLSTATE 55075 and carry on with the next section
      DECLARE continue handler for sqlstate value '55075'
      BEGIN
      end; --
      -- loop over the packages that belong to the matching (non-system) routines
      for v as cur1 cursor for
        select R.ROUTINESCHEMA, R.ROUTINENAME, R.SPECIFICNAME, P.PKGSCHEMA, P.PKGNAME, P.TOTAL_SECT
        from SYSCAT.ROUTINEDEP D, SYSCAT.ROUTINES R, SYSCAT.PACKAGES P
        where rtrim(R.routineschema) like schema
        and R.routinename like name
        and D.BTYPE='K'
        and D.routinename=R.specificname
        and D.BSCHEMA=P.PKGSCHEMA
        and D.BNAME=P.PKGNAME
        and R.ROUTINESCHEMA not like 'SYS%'
      do
        set sectno=1; --
        call dbms_output.put_line( '"' || V.ROUTINESCHEMA || '"."' || V.ROUTINENAME || '" ("' || V.SPECIFICNAME || '"): "' || V.PKGNAME || '" sections: ' || V.TOTAL_SECT); --
        -- explain every section of the routine's package
        while (sectno <= v.total_sect) do
          call EXPLAIN_FROM_CATALOG(V.PKGSCHEMA, V.PKGNAME, '', sectno, explain_schema, requestor, explain_time, source_name, source_schema, source_version); --
          set sectno=sectno + 1; --
        end while; --
      end for; --
    end;

    -- create the explain tables if needed, then explain all routines and format the result
    call sysproc.sysinstallobjects('EXPLAIN','C',NULL,CURRENT USER);
    delete from explain_instance;
    set serveroutput on;
    call explainRoutine('%','%');
    select count(*) from explain_instance;
    connect reset;
    !db2exfmt -d mydb -1 -w % -g -o exp.out;

  • Guest
    Sep 4, 2019

    Good initiative, and I'm already looking at an opportunity to utilize this idea for our account. Good one.

  • Guest
    Sep 4, 2019

    Much needed in a Db2 environment. Very good idea and initiative. Kudos!!

  • Guest
    Sep 4, 2019

    Good one. Way to go.

  • Guest
    Sep 4, 2019

    Very good, thank you.

  • Guest
    Sep 4, 2019

    We have already tried to address this scenario (as I understand it) in a number of ways. For example, we have introduced MON_GET_ROUTINE to help identify which routines are expensive, MON_GET_ROUTINE_EXEC_LIST to find out which statements in those routines are expensive, and once you know the statement executable ID, you can use the appropriate EXPLAIN_FROM_* routine to extract the access plan from the section of the statement (as outlined in the technote mentioned by Robert).

    It is not clear to me what the shortcomings of the provided solution are and why a (very) different solution is being asked for. The only scenario I can imagine is if you chose to get the explain data for all the SQL rather than just the statements of interest (which seems inefficient).

    If you have specific scenarios where there are gaps in the current capabilities, that would help us better understand why the requested approach is desired.

    Thanks.
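
    For reference, a minimal sketch of the workflow described in this comment; the schema/routine names, the metric column used for ordering, and the executable ID are placeholders to adapt, and the exact signatures should be checked against the MON_GET_* and EXPLAIN_FROM_SECTION documentation:

    -- 1) identify the most expensive routines
    SELECT * FROM TABLE(MON_GET_ROUTINE(NULL, NULL, NULL, NULL, -2))
      ORDER BY TOTAL_CPU_TIME DESC FETCH FIRST 10 ROWS ONLY;

    -- 2) list the statements executed inside one routine, with their executable IDs
    SELECT * FROM TABLE(MON_GET_ROUTINE_EXEC_LIST(NULL, 'MYSCHEMA', NULL, 'FARE_CALCULATION', -2));

    -- 3) explain the section of the one statement of interest from the package cache
    --    (substitute the EXECUTABLE_ID value returned by step 2)
    CALL EXPLAIN_FROM_SECTION(x'0100', 'M', NULL, 0, NULL, ?, ?, ?, ?, ?);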

  • Guest
    Sep 4, 2019

    Good initiative, appreciated.

  • Guest
    Sep 3, 2019

    It doesn't make sense to get an access plan for a routine; an access plan is for a query. A routine is either inlined, in which case it does not have its own access plan (it is incorporated into the invoking query), or it is not inlined, in which case it has one or more queries embedded in it. And we already have tools to explain those embedded queries. For example, for a compiled SQL routine, you can use SET_ROUTINE_OPTS to enable explain when you create the routine. Alternatively, you can use EXPLAIN_FROM_CATALOG to explain the queries embedded within a routine.

    Please see this tech note:
    https://www-01.ibm.com/support/docview.wss?uid=swg21279292

    Explaining all the queries in all the routines in the database would not be a good idea; the output would probably be too huge to be useful.
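
    For reference, a minimal sketch of the two approaches mentioned above (schema, routine, and package names are placeholders, and the exact bind options accepted by SET_ROUTINE_OPTS are described in the technote):

    -- (a) enable explain for compiled SQL routines created later in this session,
    --     then (re)create the routine so its static SQL is explained at bind time
    CALL SET_ROUTINE_OPTS('EXPLAIN ALL');
    -- CREATE OR REPLACE PROCEDURE myschema.fare_calculation ... ;

    -- (b) explain one section of an existing routine's package straight from the catalogs
    CALL EXPLAIN_FROM_CATALOG('PKGSCHEMA', 'PKGNAME', '', 1, NULL, ?, ?, ?, ?, ?);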

  • Guest
    Sep 3, 2019

    Excellent DB2 performance-enhancing utility. Great work, Raghav!

  • Guest
    Sep 3, 2019

    It would be a useful tool for analyzing performance issues.

  • Guest
    Sep 3, 2019

    This is a very good idea. It will save a lot of time for DBAs.

  • Guest
    Sep 3, 2019

    Great idea Buddy. It will really help the DBAs a lot.

  • Guest
    Sep 3, 2019

    It would be a handy tool to have

  • Guest
    Sep 3, 2019

    Excellent initiative.. We can try to integrate with BF. Very useful idea :)

  • Guest
    Sep 3, 2019

    Very good initiative. It will reduce the pain of finding performance issues by cost so the queries can be fixed, which will save a lot of time.

  • Guest
    Sep 2, 2019

    Well done, good initiative

     

  • Guest
    Sep 2, 2019

    Very interesting approach!

  • Guest
    Sep 2, 2019

    Good useful tool 
