IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: Connectivity

Showing 22

Snowflake Connector Error handling

1) We encountered an issue while using the insert or update option. The continue on error option does not work and the job fails if it encounters bad data. We opened a case with IBM. There is no reject capture mechanism. Workaround: Two step solut...
almost 5 years ago in Connectivity 6 Not under consideration
6 MERGED

Azure Sharepoint connector to import CSV files for loading into other data sources

Merged
We have an internal system where DataStage jobs (nearly 50 DataStage jobs) should read CSV files from Azure using the DataStage connector stage and load the data into SQL Server. A CSV file is attached for reference. These files are coming from internal a...
over 5 years ago in Connectivity 0 Not under consideration

DataStage - Azure [Datalake] Storage Connector - Support parallelism parquet format

From a PMR it has been confirmed that for parallel read/write, only the CSV and Delimited formats support parallel write operations; the rest of the file formats do not support parallel read/write. The Parquet format doesn't support parallel read/write, and this is not document...
almost 5 years ago in Connectivity 0 Not under consideration

Support for endpoints in S3 connector

We need to write files into our company's Amazon S3 bucket with DataStage 11.5.0.2. We cannot tune the S3 Connector component with our specific endpoint, as we can using the "endpoint" parameter with the AWS CLI.
over 5 years ago in Connectivity 2 Not under consideration
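For reference, a sketch of the tunable the idea asks for, as it already exists in the AWS CLI's `--endpoint-url` global option. The bucket, file, and endpoint values below are hypothetical examples, not taken from the request.

```shell
# The AWS CLI lets callers point S3 operations at a custom endpoint via
# --endpoint-url; the idea asks for an equivalent property on the DataStage
# S3 Connector. Bucket, file, and endpoint here are illustrative only.
ENDPOINT_URL="https://s3.internal.example.com"
UPLOAD_CMD="aws s3 cp ./extract.csv s3://example-bucket/landing/ --endpoint-url $ENDPOINT_URL"
echo "$UPLOAD_CMD"   # run this where the AWS CLI and credentials are configured
```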

Reading data from Amazon Simple Queue Service (SQS)

Need DataStage integration with Amazon SQS (Simple Queue Service) to communicate JSON files.
almost 8 years ago in Connectivity 0 Not under consideration

Introduction of SSO passthrough of ACLs. For example, SSO onto CP4D on Azure would create automatic data connections down onto the Azure FS in line with the user's Azure RBAC, ACLs, and CP4D RBAC. This is already a key feature of Databricks and DataIQ.

Allows users to access their files in CP4D without the need to hand-crank connection strings.
over 2 years ago in Connectivity 0 Not under consideration

Decimal Data type support for DataStage File Connector

Currently DataStage 11.5 does not support the Decimal data type in its File Connector. This RFE requests Decimal data type support in the File Connector.
almost 8 years ago in Connectivity 2 Not under consideration

Allow File Connector to manage Hive tables properly when they are located in HDFS Transparent Data Encryption Zones

When the File Connector is configured to write files to an encrypted zone (TDE) within HDFS, with "Create Hive Table" set to Yes and "Drop existing table" set to Yes, jobs will fail if the table already exists. This is because Hive requires the PU...
about 8 years ago in Connectivity 0 Not under consideration

DataStage - File Connector - Add support of NULL type when reading PARQUET file

We use DataStage for most of the ETL jobs. We want to read PARQUET file (which contains nullable columns) and do further processing on it. But File Connector does not support NULL type when reading PARQUET file.
over 7 years ago in Connectivity 0 Not under consideration

File Connector and Avro

File Connector to generate its own Avro schema, at run time. Anything that moves us closer to that, and further away from a) hand-coding Avro schemas and b) physically copying them to the edge node, would probably be a step in the right direction.
about 8 years ago in Connectivity 0 Not under consideration
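To illustrate the kind of runtime schema generation the idea describes, here is a minimal sketch that derives an Avro record schema from column metadata. The column names, types, and the `build_avro_schema` helper are hypothetical illustrations, not part of the File Connector's actual API.

```python
import json

# Hypothetical column metadata as a job design might expose it:
# (column name, Avro primitive type) pairs.
columns = [("customer_id", "long"), ("name", "string"), ("balance", "double")]

def build_avro_schema(record_name, cols):
    """Derive a minimal Avro record schema from (name, type) pairs,
    making every field nullable via a union with "null"."""
    return {
        "type": "record",
        "name": record_name,
        "fields": [
            {"name": n, "type": ["null", t], "default": None}
            for n, t in cols
        ],
    }

schema = build_avro_schema("Customer", columns)
print(json.dumps(schema, indent=2))  # schema generated at run time, not hand-coded
```

Generating the schema from metadata at run time removes both hand-coding and the need to copy schema files to the edge node, which is the direction the request points toward.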