IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: StreamSets

Showing 13 of 2076

Add a validation check to Job Parameters edited using bulk edit mode

This would avoid downstream errors when updating pipeline versions (see screenshot). I filed https://streamsets.atlassian.net/browse/DPM-24656
12 days ago in StreamSets 0 Submitted
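
For illustration only, here is a minimal Python sketch of the kind of validation the entry above asks for: checking bulk-edited job parameters against the parameter names the target pipeline version actually defines before the update is applied. The function name and data shapes are hypothetical, not part of any StreamSets API.

    # Hypothetical helper: flag a bulk edit whose keys don't match the
    # parameters defined by the target pipeline version.
    def validate_bulk_edit(edited_params: dict, defined_params: set) -> list:
        """Return a list of human-readable problems; an empty list means the edit looks safe."""
        problems = []
        unknown = set(edited_params) - defined_params
        missing = defined_params - set(edited_params)
        if unknown:
            problems.append(f"Unknown parameters (possible typos): {sorted(unknown)}")
        if missing:
            problems.append(f"Parameters left undefined: {sorted(missing)}")
        return problems

    if __name__ == "__main__":
        defined = {"JDBC_URL", "BATCH_SIZE", "OUTPUT_DIR"}
        edited = {"JDBC_URL": "jdbc:postgresql://db:5432/x", "BATCH_SZIE": "1000"}
        for problem in validate_bulk_edit(edited, defined):
            print(problem)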

Security permission set at group level for templates (ACLs on objects created from a pipeline/job template/job)

Whenever a pipeline, job template, or job is created in StreamSets, an ACL can allow or restrict others from working on it. But in the case of a job template, other users cannot see the job instances created on top of it even when they are in the ACL, so only the owner can see what is happening while a file is processed. The StreamSets team said it is a bug on their end. When can we expect this issue to be fixed, given that only the person who created the template can see the logs?
5 months ago in StreamSets 0 Under review

Allow renaming of Kubernetes deployments

CBRE manages their Kubernetes deployments manually through their own yaml files and integrates with Control Hub via Python SDK scripts. They also integrate with Datadog. Because of the dynamic naming of the deployment in the fashion of streamsets-...
about 2 months ago in StreamSets 1 Future consideration
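
As a sketch of the manual-YAML workflow described in the entry above (not the requested Control Hub feature), the snippet below shows how a locally managed deployment manifest's name could be overridden before it is applied, keeping labels consistent for monitoring. It assumes PyYAML; the file path and naming scheme are placeholders.

    # Sketch only: rewrite metadata.name in a self-managed Kubernetes
    # deployment manifest so monitoring tools see a stable, human-chosen name.
    import yaml

    def rename_deployment(manifest_path: str, new_name: str) -> dict:
        with open(manifest_path) as f:
            manifest = yaml.safe_load(f)
        manifest["metadata"]["name"] = new_name          # e.g. "edge-ingest-prod"
        labels = manifest["metadata"].setdefault("labels", {})
        labels["app.kubernetes.io/name"] = new_name      # keep labels consistent for Datadog
        return manifest

    if __name__ == "__main__":
        updated = rename_deployment("deployment.yaml", "edge-ingest-prod")
        print(yaml.safe_dump(updated))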

Snowpipe Streaming destination for Snowflake

The current Snowflake destination has performance limitations due to the file-based process of uploading to Snowflake. Since 2023, Snowflake has supported Snowpipe Streaming ingestion, which doesn't have problems related to file-based upload, has ...
5 months ago in StreamSets 0

Adding resource files to Deployable Transformer for Snowflake engines by introducing External Resource for Platform Users

In the same way that StreamSets Data Collector (SDC) allows for the upload of external resources via the user interface, there should be a similar option available for the Deployed Transformer for Snowflake. Use Case 1: According to our organizatio...
9 months ago in StreamSets 0 Under review

Ability to Enable and Change Logging Levels and Log Destinations/Formats for Pipelines and Stages

It's possible now to add log4j appenders to a Data Collector's sdc.properties file to set or change logging levels, writing to the sdc.log file, but the file is verbose and difficult to interpret. It would be great to be able to do this within the...
5 months ago in StreamSets 0 Future consideration
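
For context, a minimal log4j-properties sketch of the kind of appender and level override the entry above refers to; the logger name, file path, and layout pattern are illustrative assumptions, not StreamSets defaults.

    # Illustrative log4j overrides: raise one stage package to DEBUG and
    # route its messages to a separate, easier-to-read log file.
    log4j.logger.com.streamsets.pipeline.stage=DEBUG, stageFile
    log4j.appender.stageFile=org.apache.log4j.FileAppender
    log4j.appender.stageFile.File=/logs/stage-debug.log
    log4j.appender.stageFile.layout=org.apache.log4j.PatternLayout
    log4j.appender.stageFile.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n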

Snowflake Bulk not able to read from read-only database due to the temporary file format.

When the Snowflake Bulk stage tries to read from a read-only database, it fails with the message: "Could not create a temporary file format (SNOWFLAKE_47)" due to the lack of permissions to create the file format object in the source database. This is...
10 months ago in StreamSets 1 Under review
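
For background on the Snowflake side, here is a hedged sketch of the behaviour the request above asks the stage to adopt: reference a named file format created in a writable database instead of creating a temporary one in the read-only source. All object names, credentials, and the utility database are assumptions.

    # Sketch: query a stage in a read-only database by referencing a named
    # file format created in a separate, writable utility database.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # placeholder account and credentials
        user="my_user",
        password="my_password",
        warehouse="my_wh",
    )
    cur = conn.cursor()

    # Create the file format where we DO have CREATE privileges.
    cur.execute(
        "CREATE FILE FORMAT IF NOT EXISTS util_db.public.csv_ff "
        "TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"'"
    )

    # Read from the read-only database's stage without creating any object there.
    cur.execute(
        "SELECT t.$1, t.$2 "
        "FROM @readonly_db.public.my_stage (FILE_FORMAT => 'util_db.public.csv_ff') t "
        "LIMIT 10"
    )
    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()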

Runtime resource files - support for encryption and decryption at runtime within the pipeline using keys from Azure Key Vault

Humana has had success using the exec function in the credential store to fetch encryption keys from Azure Key Vault. They are looking for a similar feature/implementation for resource files, specifically encrypting resource files and decrypting them ...
12 months ago in StreamSets 3 Not under consideration
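
Outside the product, the pattern the entry above describes can be approximated in a few lines. The sketch below is my illustration of that workflow, not a StreamSets feature: it fetches a symmetric key stored as a Key Vault secret and decrypts a resource file with it. The vault URL, secret name, and file paths are placeholders.

    # Sketch: fetch a Fernet key from Azure Key Vault, then decrypt an
    # encrypted resource file at runtime.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient
    from cryptography.fernet import Fernet

    VAULT_URL = "https://my-vault.vault.azure.net"   # placeholder
    SECRET_NAME = "resource-file-key"                # placeholder

    def decrypt_resource(encrypted_path: str, decrypted_path: str) -> None:
        client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
        key = client.get_secret(SECRET_NAME).value.encode()
        fernet = Fernet(key)
        with open(encrypted_path, "rb") as f:
            plaintext = fernet.decrypt(f.read())
        with open(decrypted_path, "wb") as f:
            f.write(plaintext)

    if __name__ == "__main__":
        decrypt_resource("lookup.csv.enc", "lookup.csv")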

AWS S3 pipeline ability to use temporary credentials with AWS resources - passing three parameters, including a session token

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html State Street is starting its cloud movement and wishes to move data from on-prem to AWS S3. State Street Security/internal IT wishes to use 60 min exp Tem...
7 months ago in StreamSets 0 Delivered
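
For reference, the three-parameter temporary-credential pattern the title above refers to looks like this outside StreamSets (a boto3 sketch under assumed role and bucket names): STS issues an access key, secret key, and session token with a short expiry, and all three must be passed to the S3 client.

    # Sketch: obtain 60-minute temporary credentials via STS and pass all three
    # values (access key, secret key, session token) to an S3 client.
    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/s3-writer",   # placeholder role
        RoleSessionName="streamsets-upload",
        DurationSeconds=3600,                                  # 60-minute expiry
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],               # the third parameter
    )
    s3.upload_file("part-0000.csv", "my-landing-bucket", "incoming/part-0000.csv")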

Support for an origin to read from Azure ADLS/Blob File Shares

Product - IBM StreamSets | Data Collector. The customer has a use case where they need to read data from ADLS File Shares. It could be achieved through Groovy scripting using the Groovy origin; however, having a dedicated stage for it would be much more helpful.
8 months ago in StreamSets 0 Under review
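
Until a dedicated stage exists, the read itself is only a few lines with the Azure SDK. The sketch below is my illustration of the kind of file-share read the Groovy workaround would wrap, using the azure-storage-file-share package and placeholder names.

    # Sketch: download one file from an Azure file share; in the described
    # workaround, equivalent logic would live inside a Groovy scripting origin.
    import os
    from azure.storage.fileshare import ShareFileClient

    file_client = ShareFileClient.from_connection_string(
        conn_str=os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # placeholder env var
        share_name="landing-share",                              # placeholder share
        file_path="inbound/orders.csv",                          # placeholder path
    )
    data = file_client.download_file().readall()
    print(f"read {len(data)} bytes")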