IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: Connectivity

Showing 13

Enhancement request - IBM Cloud Pak for Data – Redshift Connector

Hi IBM team, kindly find an enhancement request for the IBM Cloud Pak for Data – Redshift Connector. Problem: Using the IBM Cloud Pak for Data – Redshift Connector (with WRITE MODE=LOAD), empty source tables are not getting loaded to the target AWS Redshift ...
about 1 year ago in Connectivity 0 Delivered

Currently we are failing when we send a data file larger than 10 MB to Cloud Object Storage via DataStage 11.7

Here is the error we are receiving in the DataStage 11.7 log (PRDCEDPDSENG01.W3-969.IBM.COM): PutFile_COSBucket,0: Fatal Error: CDIER0410E: Error in step=REST, cause=com.ibm.e2.provider.ws.exceptions.RESTWSRequestException: CDIER0905E: The REST step foun...
over 7 years ago in Connectivity 1 Delivered
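As an illustration of one way around a per-request size cap like the one this idea describes, the sketch below uploads a large file to IBM Cloud Object Storage through its S3-compatible API using a multipart transfer. The endpoint, credentials, bucket, and key are placeholders, and this is not how the DataStage REST step itself behaves; it is only a minimal workaround sketch.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder endpoint and HMAC credentials for an IBM COS instance.
cos = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="<HMAC_ACCESS_KEY>",
    aws_secret_access_key="<HMAC_SECRET_KEY>",
)

# Split anything larger than 8 MB into parts instead of one large request.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        multipart_chunksize=8 * 1024 * 1024)

cos.upload_file("large_export.dat", "my-cos-bucket",
                "exports/large_export.dat", Config=config)
```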

The REST step found a large input data item whose actual data size exceeds the supported maximum data size limit

The DataStage job is not able to import a 350 MB file using the XML hierarchical stage. When we tried to import a smaller file, the response from the REST API call is Success, whereas in this case the REST API response is Fail. In this case ...
almost 8 years ago in Connectivity 3 Delivered

Allow enterprise CA certificate injection #cpfield

In order to connect to corporate assets in this enterprise, the secure connection needs to be signed by that enterprise's certificate authority. In order for Cloud Pak for Data to be enterprise-ready, it needs a function to allow the injection of custo...
over 4 years ago in Connectivity 4 Delivered
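To illustrate why the corporate CA must be trusted by the client side, here is a minimal Python sketch: the hostname and CA bundle path are assumptions made up for the example, and this shows generic TLS verification against a custom CA, not Cloud Pak for Data's internal mechanism.

```python
import requests

# Hypothetical corporate endpoint and CA bundle path (assumptions for
# illustration only).
CORPORATE_ENDPOINT = "https://db.example.corp:8443/health"
CORPORATE_CA_BUNDLE = "/etc/pki/tls/certs/corporate-ca.pem"

# requests verifies the server certificate against the supplied CA bundle;
# without the injected corporate CA, this call fails with an SSL error.
resp = requests.get(CORPORATE_ENDPOINT, verify=CORPORATE_CA_BUNDLE, timeout=30)
print(resp.status_code)
```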

Support High Availability for namenode in File Connector stage

As with most big data installations, we have a primary and a secondary namenode for the cluster. When the primary namenode fails over to the secondary namenode (or vice versa), a BigIntegrate DataStage job using the File Connector WebHDFS access...
about 8 years ago in Connectivity 2 Delivered
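The failover behaviour this idea asks for can be illustrated with a small sketch against the WebHDFS REST API: try each candidate NameNode in turn and fall through when one is standby or unreachable. The hostnames and the port (9870 is the Hadoop 3 default; older clusters use 50070) are assumptions; this is not the File Connector's implementation.

```python
import requests

# Hypothetical NameNode hosts for an HA pair; adjust for your cluster.
NAMENODES = ["nn1.example.com:9870", "nn2.example.com:9870"]

def webhdfs_list(path):
    """Try each NameNode in turn; the standby rejects the request,
    so we move on to the next candidate."""
    last_error = None
    for host in NAMENODES:
        url = f"http://{host}/webhdfs/v1{path}?op=LISTSTATUS"
        try:
            resp = requests.get(url, timeout=10)
            if resp.ok:
                return resp.json()
            last_error = RuntimeError(f"{host} returned {resp.status_code}")
        except requests.ConnectionError as exc:
            last_error = exc
    raise last_error

print(webhdfs_list("/user"))
```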

Test and document support for more recent Parquet file formats.

The current Parquet/Orc file instructions provide support for Parquet 1.9.0, which is now 3.5 years old. We have vendors who want to use Parquet to feed their big data lakes, and we are struggling to support them with this old format, even in an i...
over 5 years ago in Connectivity 2 Delivered
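As a small illustration of the format-version gap the idea mentions, the sketch below uses pyarrow as a stand-in writer: the `version` parameter controls whether a file sticks to the older Parquet logical types that a 1.9.0-era reader understands or uses the newer ones. pyarrow is not the connector itself; this is only a demonstration of the concept.

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# version="1.0" restricts the file to older, widely readable logical types;
# "2.6" enables the newer format features recent consumers expect.
pq.write_table(table, "legacy_compatible.parquet", version="1.0")
pq.write_table(table, "modern.parquet", version="2.6")

print(pq.read_table("modern.parquet").to_pydict())
```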

The timeout value is set to 5 min by default within the Redshift connector.

We are encountering some issues when we try to load data from DB2 to Redshift. The data from DB2 is extracted to a file in AVRO format -> uploaded to S3 -> a COPY command is called on the Redshift cluster. The problem is, COPY does execute howe...
almost 3 years ago in Connectivity 0 Delivered
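For context, the flow the idea describes can be run directly against Redshift as sketched below, with the session statement timeout raised beyond the 5-minute default the connector applies. The host, credentials, table, bucket, and IAM role are placeholders, and this is a workaround-style illustration rather than the connector's own code path.

```python
import psycopg2

# All connection details, table, bucket, and IAM role are placeholders.
conn = psycopg2.connect(
    host="redshift-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Session statement timeout in milliseconds; 0 disables it entirely.
    cur.execute("SET statement_timeout TO 3600000;")
    # COPY the AVRO files staged in S3 into the target table.
    cur.execute("""
        COPY staging.orders
        FROM 's3://my-bucket/exports/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS AVRO 'auto';
    """)
conn.close()
```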

Extending temp file creation in BQ to include microseconds

The file names do not include the full microseconds; they stop at only 3 digits (milliseconds), and all the files are getting created with 4 zeroes at the end. Node 4 and node 21 tried to create a file with the same name bigQueryTempFile20210415133...
over 4 years ago in Connectivity 0 Delivered
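The collision this idea describes comes from truncating the timestamp to milliseconds: two parallel nodes writing in the same millisecond produce identical names. A minimal sketch of the difference, using a file name pattern that only resembles the one in the idea:

```python
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# Millisecond precision: names from different nodes can collide within
# the same millisecond (the situation the idea describes).
ms_name = "bigQueryTempFile" + now.strftime("%Y%m%d%H%M%S") + f"{now.microsecond // 1000:03d}"

# %f emits all six microsecond digits; a short random suffix removes any
# remaining chance of two nodes picking the same name.
us_name = "bigQueryTempFile" + now.strftime("%Y%m%d%H%M%S%f") + "_" + uuid.uuid4().hex[:8]

print(ms_name, us_name, sep="\n")
```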

File Connector - reject link not supported

Hi, we are using Parquet format files and there appears to be an issue with the File Connector not supporting reject links for Parquet. This is a roadblock for our NextGen migration, as existing BDFS stage jobs will be replaced with File Connect...
almost 5 years ago in Connectivity 1 Delivered

BQ connector (source) to use Gzip compression option

While reading from a BQ table, in the background the connector should use a Gzip compression method to compress the file and load it to GCS; from the GCS bucket the data should then be read. This process will generate the most optimal performance while rea...
almost 5 years ago in Connectivity 0 Delivered
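The export-then-read pattern the idea asks for is available in the underlying BigQuery API: an extract job can write GZIP-compressed shards to GCS, which a downstream reader then consumes. The project, dataset, table, and bucket below are placeholders, and this sketch shows the public google-cloud-bigquery API rather than the DataStage connector's internals.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Ask BigQuery to compress the exported shards with GZIP.
job_config = bigquery.ExtractJobConfig(
    compression=bigquery.Compression.GZIP,
    destination_format=bigquery.DestinationFormat.CSV,
)

extract_job = client.extract_table(
    "my-project.my_dataset.my_table",
    "gs://my-bucket/exports/my_table-*.csv.gz",  # wildcard allows sharded output
    job_config=job_config,
)
extract_job.result()  # wait for the export to finish
```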