IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: DataStage

Showing 144 of 5237

gzip support with filter option on sequential file stage when using multiple files option

Our use case is that we need to split data into multiple files by "x" key, and as the data is large (MBs to GBs), we also want the files to be compressed as they are written. The option works when writing the data into a single file.
almost 2 years ago in DataStage 0 Not under consideration

Datastage supporting generic API's to an external security provider

Background: GDPR is bringing an imminent need to safely secure sensitive (customer) data. This brings new requirements to the tools that have access to this data. DataStage, due to the functional nature of the tool, requires extremely high...
almost 7 years ago in DataStage 0 Not under consideration

Process to enable TLS 1.3 on DataStage versions

Need a process to enable/upgrade to TLS 1.3 for DataStage 11.7.1.x versions. This will help enable new security features and meet audit requirements. Product: InfoSphere Information Server: Data Integration (DataStage)
about 1 year ago in DataStage 1 Delivered

SAP PACK, ODP Delta-initialization for classification datasources

We extract classification data from SAP; in initial mode with ODP this takes about 10 minutes. Not being able to run the datasources in delta mode, this is a performance issue. I reported this bug as a case, but this way was unfortunately not succes...
about 2 years ago in DataStage 0 Not under consideration

Orchestration Pipeline to be compatible with OpenShift Pipeline

The client has deployed cluster-wide OpenShift Pipeline for any automation/configuration work that could be used. This in turn has prevented Orchestration Pipeline from successfully deploying its jobs, as they fail due to using the OpenShift Pipeline service a...
about 1 year ago in DataStage 0 Not under consideration

Distributed Transaction

Add the Distributed Transaction stage to the palette in IBM Cloud Pak for Data.
about 2 years ago in DataStage 1 Delivered

Improve the performance of Teradata Connector when doing BULK read

Teradata bulk export of ~7 million rows from DataStage using the TD connector takes over ~30 minutes, compared to using TPT export (which is also executed on the DS engine) and only took around 3-4 minutes. The timing differences are significant, ...
over 1 year ago in DataStage 1 Under review

Adding feature for more frequent commits when using DB2 connector

When loading a large amount of data into DB2 for Warehousing, we exceed DB2's 300 GB limitation, which means we need to load the data with more than just one commit. However, when using external tables in DataStage, the DB2 connector ignores the co...
about 4 years ago in DataStage 1 Delivered

Databricks connector

Adding a Databricks connector to DataStage could bring several benefits. Here are a few reasons why it could be a good idea: Enhanced Data Integration: A Databricks connector would allow seamless integration between DataStage and Databricks, enabl...
over 1 year ago in DataStage 0 Functionality already exists

Enhancing DataStage Monitoring and Debugging Capabilities

One of the critical aspects of monitoring, tuning, or debugging individual jobs or an entire DataStage project is to clearly understand which database system activities are triggered by which stage in a specific DataStage job within a particular D...
over 1 year ago in DataStage 1 Under review