IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of the ideas you have submitted to IBM, to create and manage groups of ideas, or to create an idea explicitly set as either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com


All ideas (479 shown)

Addition of Column Reordering Feature in IBM Data Refinery

As a user of IBM Data Refinery, I appreciate the platform's robust capabilities for data preparation and cleansing. However, I have identified a significant functionality gap that I believe could greatly enhance the user experience and productivit...
2 months ago in Cloud Pak for Data as a Service 0 Submitted

Send IDAA internal logs to an external Syslog Server

We want to send all (administrative) logs from the IDAA to our internal Syslog Server via a secured (TLS1.3) connection. This means all activities and sign-on/offs that are currently logged internally only, shall be sent to the Syslog Server. To g...
4 months ago in Db2 Analytics Accelerator for z/OS 0 Submitted

Backups to use temp dbspaces with most closely matching page size.

For backups, store the before-images in the temporary dbspaces which most closely match the page size of the dbspace being backed up. Presumably this would be more efficient. E.g. if backing up a 16k dbspace, prefer putting before-images in a 16k, 8k or 4k...
28 days ago in Informix / Informix Server 0 Submitted
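The closest-match heuristic this idea asks for can be sketched in a few lines. This is purely illustrative of the requested behaviour, not Informix server code; the dbspace names and KB page sizes are hypothetical:

```python
def pick_temp_dbspace(page_size_kb, temp_dbspaces):
    """Pick the temp dbspace whose page size most closely matches the
    page size of the dbspace being backed up.
    temp_dbspaces: dict of dbspace name -> page size in KB.
    Ties go to the first candidate in iteration order."""
    return min(temp_dbspaces,
               key=lambda name: abs(temp_dbspaces[name] - page_size_kb))

# Backing up a 16k dbspace prefers the 16k temp dbspace over 8k or 4k.
spaces = {"tmp4k": 4, "tmp8k": 8, "tmp16k": 16}
```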

Allow temp tables and backup before images to handle temp dbspaces which fill up.

If temporary tables/backup before-images are spread across multiple temp dbspaces and one fills up, then put the data/page into the temp dbspaces which still have space. This would stop backups/jobs failing.
28 days ago in Informix / Informix Server 0 Submitted
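The spill-over behaviour requested here amounts to a simple fallback search over the remaining temp dbspaces. A hypothetical sketch (names and units invented for illustration):

```python
def place_before_image(required_kb, temp_dbspaces):
    """Fallback placement: if the preferred temp dbspace is full, put the
    page into any temp dbspace that still has enough free space instead
    of failing the backup/job.
    temp_dbspaces: list of (name, free_kb) pairs, preferred candidate first."""
    for name, free_kb in temp_dbspaces:
        if free_kb >= required_kb:
            return name
    raise RuntimeError("all temp dbspaces are full")
```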

Add %ADD_MONTHS Expression to Support Rolling Date Filtering

We need to be able to replicate only the data created within last 12 months based on the CREATE_TSTMP column. There is a workaround: have a Java exit for this callable via %USERFUNC. However, even that solution is not provided by IBM and has to be...
14 days ago in Replication: Change Data Capture 0 Submitted
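What an %ADD_MONTHS expression would have to compute is ordinary month arithmetic with end-of-month clamping. A minimal Python sketch of the rolling 12-month filter; the function names are illustrative only, not part of CDC or its %USERFUNC interface:

```python
import calendar
from datetime import datetime

def add_months(ts: datetime, months: int) -> datetime:
    """Shift a timestamp by whole months, clamping the day to the last
    valid day of the target month (e.g. Mar 31 - 1 month -> Feb 28/29)."""
    total = ts.year * 12 + (ts.month - 1) + months
    year, month0 = divmod(total, 12)
    day = min(ts.day, calendar.monthrange(year, month0 + 1)[1])
    return ts.replace(year=year, month=month0 + 1, day=day)

def within_last_12_months(create_tstmp: datetime, now: datetime) -> bool:
    """Rolling filter: keep only rows whose CREATE_TSTMP is at most
    12 months old at replication time."""
    return create_tstmp >= add_months(now, -12)
```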

Make Lost Connection Errors Recoverable

Currently, when CDC replication fails on Target side due to Connection Reset (possibly due to a firewall issue) the error is marked non-recoverable. This causes a replication outage that requires manual restart of the affected subscriptions. To av...
15 days ago in Replication: Change Data Capture 0 Submitted
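Treating a connection reset as recoverable typically means an automatic restart with backoff rather than an operator-driven restart. A generic sketch of that pattern; the names here are hypothetical and this is not the CDC engine API:

```python
import time

def run_with_retry(apply_fn, max_attempts=5, base_delay=1.0):
    """Retry a flaky operation: treat ConnectionResetError as recoverable
    and restart with exponential backoff instead of stopping outright."""
    for attempt in range(max_attempts):
        try:
            return apply_fn()
        except ConnectionResetError:
            if attempt == max_attempts - 1:
                raise  # give up only after exhausting attempts
            time.sleep(base_delay * (2 ** attempt))
```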

The BigQuery connector uses the legacy API by default, which adds a 90-minute wait for the streaming buffer to be flushed before DML operations can be performed. We need the BQ connector to support the Storage Write API to avoid this wait time

If a BQ job abends for some reason, e.g. due to a connectivity issue, the application team has to wait 90 minutes before restarting the job. If this happens in a critical flow, it causes SLA misses and business impact
15 days ago in DataStage 1 Submitted

Ability to add users (from user groups) to individual assets

Currently, if a user group is added to access a catalog, individuals who are part of that user group cannot be added to assets. A large Federal customer requests the ability to both: 1) add user groups to catalogs (exists today), & 2) add ind...
29 days ago in Cloud Pak for Data / Watson Knowledge Catalog 0 Submitted

Allow Notebooks to be deployed to Deployment Space

The inconsistency of what can be deployed to a Space seems in need of attention. We should be able to deploy a Jupyter Notebook to a deployment space. We have been using notebooks as job controllers in a sense, for datastage flows/pipelines, and b...
3 months ago in Cloud Pak for Data as a Service 1 Submitted

Prebuilt row count function on Cognos Report

We've noticed a challenge with our reports: users often need to gauge the scope of the data they're dealing with, which currently isn't straightforward in Cognos Reporting. While our reports can generate outputs ranging from 10,000 to 3 million re...
about 1 month ago in Cognos Analytics / Reporting 0 Submitted