IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: Connectivity

Showing 40 of 2082

Currently we are failing when we send a data file larger than 10 MB to Cloud Object Storage via DataStage 11.7

Here is the error we are receiving in the DataStage 11.7 log (PRDCEDPDSENG01.W3-969.IBM.COM): PutFile_COSBucket,0: Fatal Error: CDIER0410E: Error in step=REST, cause=com.ibm.e2.provider.ws.exceptions.RESTWSRequestException: CDIER0905E: The REST step foun...
over 7 years ago in Connectivity 1 Delivered
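A hedged sketch of the kind of workaround this request implies: split the payload into parts under the per-request cap before sending each part to object storage. The 10 MB limit is taken from the idea text; the helper name and usage are assumptions for illustration, not the DataStage connector's API.

```python
# Hypothetical workaround sketch: chunk a payload so no single request
# exceeds an assumed 10 MB per-request size cap.
LIMIT = 10 * 1024 * 1024  # 10 MB, as stated in the idea text

def split_payload(data: bytes, limit: int = LIMIT) -> list:
    """Return the payload as a list of chunks, each at most `limit` bytes."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

# A 25 MB payload would be sent as three parts (10 + 10 + 5 MB).
parts = split_payload(b"x" * (25 * 1024 * 1024))
print(len(parts))
```

Each part could then be uploaded as a separate object (or multipart-upload part) instead of one oversized REST request.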

Decimal data type support for DataStage File Connector

Currently DataStage 11.5 does not support the decimal data type in its File Connector. This RFE requests decimal data type support in the File Connector.
almost 8 years ago in Connectivity 2 Not under consideration

The REST step found a large input data item whose actual data size exceeds the supported maximum data size limit

A DataStage job is not able to import a 350 MB file using the XML hierarchical stage. When we tried to import a smaller file, the response from the REST API call is Success, whereas in this case the REST API response is Fail. In this case ...
almost 8 years ago in Connectivity 3 Delivered

Allow enterprise CA certificate injection #cpfield

In order to connect to corporate assets in this enterprise, the secure connection needs to be signed by that enterprise's certificate authority. For Cloud Pak for Data to be enterprise ready, it needs a function to allow the injection of custo...
over 4 years ago in Connectivity 4 Delivered

Allow File Connector to manage Hive tables properly when they are located in HDFS Transparent Data Encryption Zones

When the File Connector is configured to write files to an encrypted zone (TDE) within HDFS, with "Create Hive Table" set to Yes and "Drop existing table" set to Yes, jobs will fail if the table already exists. This is because Hive requires the PU...
about 8 years ago in Connectivity 0 Not under consideration
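For context on the entry above: dropping a Hive table inside an HDFS encryption zone normally fails because Hive tries to move the data to the `.Trash` directory outside the zone; HiveQL's `PURGE` clause deletes the data in place instead. A minimal sketch, assuming the connector builds its own DROP statement (the function name is hypothetical):

```python
# Hedged sketch: generate the DROP TABLE statement the connector would need.
# For tables in an HDFS TDE encryption zone, PURGE deletes data in place
# rather than moving it to .Trash outside the zone (which would fail).
def drop_table_sql(table: str, in_encryption_zone: bool) -> str:
    purge = " PURGE" if in_encryption_zone else ""
    return f"DROP TABLE IF EXISTS {table}{purge}"

print(drop_table_sql("mydb.events", in_encryption_zone=True))
```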

Support High Availability for namenode in File Connector stage

As with most big data installations, we have a primary and a secondary namenode for the cluster. When the primary namenode fails over to the secondary namenode (or vice versa), a Big Integrate Data Stage job using the File Connector WebHDFS access...
about 8 years ago in Connectivity 2 Delivered

Full RFC 4180 support for File Connector / Flat File Stage

Full RFC 4180 support for the File Connector / Flat File Stage would enable the Stage to read well-formatted flat files without any script support or preformatting.
over 7 years ago in Connectivity 0 Functionality already exists
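To illustrate what RFC 4180 compliance covers, here is a small Python example (not DataStage code) using the standard `csv` module, which handles the spec's tricky cases: quoted fields containing commas, doubled quotes as escapes, and embedded line breaks.

```python
import csv
import io

# RFC 4180 sample: a field with an embedded comma, a doubled-quote escape
# ("" -> "), and a line break inside a quoted field -- all legal per the spec.
data = 'id,name,notes\n1,"Smith, John","He said ""hi""\nsecond line"\n'

rows = list(csv.reader(io.StringIO(data)))
print(rows)
# rows[1] -> ['1', 'Smith, John', 'He said "hi"\nsecond line']
```

A stage with full RFC 4180 support would parse all three cases natively, with no preprocessing script needed.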

Support for Avro Logical Data Types in Google Cloud Storage Connector Stage

Related to existing idea - "Support for Avro Logical Data Types in File Connector Stage"
over 4 years ago in Connectivity 0 Future consideration

Test and provide documentation to provide support for more recent Parquet file formats.

The current Parquet/ORC file instructions provide support for Parquet 1.9.0, which is now 3.5 years old. We have vendors who want to use Parquet to feed their big data lakes, and we are struggling to support them with this old format, even in an i...
over 5 years ago in Connectivity 2 Delivered

Timeout value is set to 5 min by default within the redshift-connector

We are encountering some issues when we try to load data from DB2 to Redshift. The data is extracted from DB2 to a file in AVRO format -> uploaded to S3 -> a COPY command is called on the Redshift cluster. The problem is, COPY does execute howe...
almost 3 years ago in Connectivity 0 Delivered
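The load path described above (DB2 extract -> AVRO in S3 -> Redshift COPY) ends in a COPY statement like the one sketched below. This is a minimal illustration of the final step only; the table, bucket, and IAM role names are hypothetical, and it does not show the connector's internal timeout handling.

```python
# Hedged sketch: build the Redshift COPY statement that loads a staged
# AVRO file from S3. All identifiers here are made-up placeholders.
def build_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' FORMAT AS AVRO 'auto'"
    )

sql = build_copy_sql(
    "staging.orders",
    "s3://my-bucket/extract.avro",
    "arn:aws:iam::123456789012:role/redshift-copy",
)
print(sql)
```

A long-running COPY like this is exactly where a fixed 5-minute connector timeout would bite, which is presumably why the idea asks for it to be configurable.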