IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of the ideas you have submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: StreamSets

Showing 46 of 5232 ideas

Snowflake support in JDBC Query Consumer

There are many cases where customers are utilizing Snowflake as one of their data sinks. This increased utilization also creates situations where the customer will also need to pull data from Snowflake, utilizing a custom query which could join ta...
9 months ago in StreamSets · 1 vote · Not under consideration

Enable reading NUMBER(1) columns in New Oracle CDC stage as INTEGER rather than BYTE

There is an existing CI in the old system for this; adding it to the Ideas portal for customer visibility. StreamSets documentation explains that we convert Oracle's Number type to SDC data types based on precision and scale, converting NUMBER fie...
10 months ago in StreamSets · 0 votes · Under review

New Control Hub role for extracting security audit data

Currently, these SCH API endpoints are available only to users in the Organisation Administrator role: /security/rest/v1/organization/{orgId}/user/{userId}/listLogins, /security/rest/v1/metrics/dpmsupport.dp/actionAudits and /security/rest/v1/metrics...
7 months ago in StreamSets · 0 votes · Under review

MERGE statement should respect table DDL's default values and specify column in Snowflake stage

When processing data with the Snowflake destination, the stage does not respect the target table's DDL for default values. For example, if the table has a column with the default value set to CURRENT_TIMESTAMP in Snowflake, the stage will just insert a n...
7 months ago in StreamSets · 1 vote · Planned for future release

Snowflake destination events

Upon writing data to a Snowflake table, we don't get any events, which means we're unable to trigger any further action after writing to the table. Currently we're facing two problems because of this: We need to write data to two Snowflake ...
11 months ago in StreamSets · 0 votes · Future consideration

Adding resource files to Deployable Transformer for Snowflake engines by introducing External Resource for Platform Users

In the same way that StreamSets Data Collector (SDC) allows for the upload of external resources via the user interface, there should be a similar option available for the Deployed Transformer for Snowflake. Use Case 1: According to our organizatio...
8 months ago in StreamSets · 0 votes · Under review

Add support for processing CDC data for "uuid, json, jsonb" data types in PostgreSQL.

Product: StreamSets Data Collector. Stage: PostgreSQL CDC Client. Please add support for the "PostgreSQL CDC Client" to process data with the "uuid, json, jsonb" data types.
4 months ago in StreamSets · 2 votes

Allow the Schema Generator to be able to use Complex Data Types for Parquet and Avro

The Schema Generator is currently only able to generate basic schemas. For example, a map with mixed data types is considered complex and will cause an error: { "answers": { "field1": "somestring", "field2": 123 } } SCHEMA_GEN_0007 - Map '/answer...
9 months ago in StreamSets · 0 votes

Ability to Enable and Change Logging Levels and Log Destinations/Formats for Pipelines and Stages

It's currently possible to add log4j appenders to a Data Collector's sdc.properties file to set or change logging levels, writing to the sdc.log file, but the file is verbose and difficult to interpret. It would be great to be able to do this within the...
5 months ago in StreamSets · 0 votes · Under review

Support pulling data from a Databricks source via a branded origin stage

Johnson & Johnson, a strategic account, is asking for a branded origin stage for Databricks. Currently the only way to ingest data from Databricks is via the JDBC consumer origins, but there are limitations that are causing issues for them. Full c...
6 months ago in StreamSets · 0 votes · Under review