IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: StreamSets

Showing 11 of 2588

Support for List type in Variant datatype column for Snowflake Destination stage

Customer is trying to insert a JSON array into a Snowflake table where one of the columns has the Variant datatype. They encountered the error "invalid type 'LIST', column type is 'VARIANT'" when pushing array data to the Snowflake destination stage.
24 days ago in StreamSets 2 Needs more information
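A common interim workaround (an assumption on my part, not a documented StreamSets feature) is to serialize the List field to a JSON string upstream of the destination, so the value arrives as a VARCHAR that Snowflake can parse back into VARIANT (e.g. with PARSE_JSON). The serialization step, sketched in Python with a hypothetical record:

```python
import json

# Hypothetical record whose "tags" field is a List — the shape that
# triggers the "invalid type 'LIST'" error on a VARIANT column.
record = {"id": 1, "tags": ["alpha", "beta", "gamma"]}

# Serialize the list to a JSON string; the string can then be parsed
# into VARIANT on the Snowflake side (e.g. PARSE_JSON on the column).
record["tags"] = json.dumps(record["tags"])
print(record["tags"])  # → ["alpha", "beta", "gamma"] as a single string
```

In a StreamSets pipeline the same transformation would typically be done by a processor stage before the destination; the field name here is made up for illustration.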

UI Modifications: Job details visible on engine details page and also on logs. Data collector details visible on Job instance page.

The customer wanted the following modifications on the Platform UI: 1. On the engine details page, under the running pipelines section, currently only the pipeline details are shown in the UI, such as pipeline name, type, status, last reported, etc. The custom...
about 2 months ago in StreamSets 1 Under review

Snowflake support in JDBC Query Consumer

There are many cases where customers are utilizing Snowflake as one of their data sinks. This increased utilization also creates situations where the customer will also need to pull data from Snowflake, utilizing a custom query which could join ta...
9 months ago in StreamSets 1 Not under consideration

MERGE statement should respect table DDL's default values and specify column in Snowflake stage

When processing data with the Snowflake destination, it does not respect the target table's DDL for default values. For example, if the table has a column with the default value set to CURRENT_TIMESTAMP in Snowflake, the stage will just insert a n...
7 months ago in StreamSets 1 Planned for future release
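The underlying SQL behavior is easy to reproduce with any database: a writer that explicitly binds NULL for a column defeats that column's DDL default, while omitting the column from the insert lets the default fire. A minimal runnable illustration using Python's stdlib sqlite3 (SQLite standing in for Snowflake; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, ts TEXT DEFAULT CURRENT_TIMESTAMP)")

# Omitting the column lets the table's DDL default apply.
conn.execute("INSERT INTO events (id) VALUES (1)")

# Explicitly writing NULL — what a destination stage may do for a
# missing record field — bypasses the default entirely.
conn.execute("INSERT INTO events (id, ts) VALUES (2, NULL)")

rows = conn.execute("SELECT id, ts FROM events ORDER BY id").fetchall()
print(rows)  # row 1 has a generated timestamp, row 2 has ts = None
```

The requested behavior amounts to the stage omitting absent columns from its MERGE/INSERT column list rather than binding NULL for them.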

Snowflake destination events

Upon writing the data to a Snowflake table, we don't get any events, which means we're unable to trigger any further action after writing to the Snowflake table. Currently we're facing two problems because of this: We need to write data to two Snowflake ...
11 months ago in StreamSets 0 Future consideration

Snowflake CDC

Snowflake is a central part of our ecosystem, and multiple producer and consumer components exchange data through it. The lack of CDC support for Snowflake tables means we'll have to configure a poll at a certain interval, which will incur cost - ...
11 months ago in StreamSets 0 Not under consideration

Add Option to Truncate Table in Destination Stage Before Inserting Data

In many data transfer scenarios, it is essential to ensure that the destination table is empty before inserting a fresh set of data. Currently, this requires the creation of a separate pipeline to truncate the table, followed by an orchestration p...
7 months ago in StreamSets 0 Under review

Have the ability to store the error records from an HTTP stage or JDBC stage separately.

This feature request is to have the ability to write the erroneous records from an HTTP stage (client/processor) or JDBC stages directly to a dedicated data source (local FS/table). Currently the failed records along with logs from all stages are w...
10 months ago in StreamSets 0 Under review

Provide an option for Deferred merge method in Snowflake destination

https://quickstarts.snowflake.com/guide/connectors_example_push_based_java/img/e5a0394034661f7.pdf Deferred merge is a pattern whereby you maintain a temporary or hot data table (1) with changed or new records for an interval of time. When that ti...
10 months ago in StreamSets 0 Under review
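The deferred-merge pattern the request links to can be sketched generically: appends land in a cheap "hot" table, and a periodic job folds the latest row per key into the main table, then clears the hot table. A minimal sketch using Python's stdlib sqlite3, with an ON CONFLICT upsert standing in for Snowflake's MERGE (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE hot_t (id INTEGER, val TEXT)")  # cheap append-only staging

# Writes land in the hot table immediately (no merge cost per record).
conn.executemany("INSERT INTO hot_t VALUES (?, ?)", [(1, "a"), (2, "b"), (1, "a2")])

def deferred_merge(conn):
    """Periodically fold the newest row per key into main_t, then clear hot_t."""
    conn.execute("""
        INSERT INTO main_t (id, val)
        SELECT id, val FROM hot_t
        WHERE rowid IN (SELECT MAX(rowid) FROM hot_t GROUP BY id)
        ON CONFLICT(id) DO UPDATE SET val = excluded.val
    """)
    conn.execute("DELETE FROM hot_t")
    conn.commit()

deferred_merge(conn)
print(conn.execute("SELECT id, val FROM main_t ORDER BY id").fetchall())
# → [(1, 'a2'), (2, 'b')]
```

In the Snowflake destination this would mean the stage writing to a staging table and issuing the MERGE on an interval, rather than merging per batch.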

We need to be able to detect a change to a table in the CDC origin in order to create a DDL

No description provided
about 1 year ago in StreamSets 2 Future consideration