IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of the ideas you have submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Search results: watsonx Code Assistant (general)

Showing 8 of 2084

Allow user to specify GPU node where LLM gets deployed during installation

Currently there is no way to determine which GPU or node the LLM uses or is deployed to. We need a mechanism to specify the GPU node where the LLM gets deployed. Since WCA is using the SYOM framework to serve the ibm-granite-...
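As a hedged illustration of the general pattern being requested (not WCA's actual installer, which the idea says relies on the SYOM framework): in a plain Kubernetes deployment, pinning a serving pod to a specific GPU node is typically done with a nodeSelector patch. The deployment name and namespace below are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Hypothetical object names; the real WCA install may not expose these.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {"kubernetes.io/hostname": "gpu-node-1"}
                }
            }
        }
    }
    apps.patch_namespaced_deployment(
        name="llm-serving",   # hypothetical deployment name
        namespace="wca",      # hypothetical namespace
        body=patch,
    )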

Raise learning of large codebases to be a dominant requirement

As part of the Linux kernel team, I have to deal with a very large codebase. Booting up with this codebase is not trivial and takes a very long time. The WCA team should consider the learning task a highly important feature. This can save thousands of ma...
5 months ago in watsonx Code Assistant Ideas / watsonx Code Assistant (general) 3 Functionality already exists

Enable WCA to read from multiple repos and help in referencing the files across repos

When WCA is able to read multiple repos, the user can reference multiple files and leverage the different combinations for code generation. For example, I have a Linux codebase in one repo. My second repo is a test framewor...
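A minimal sketch of the cross-repo idea, assuming nothing about WCA's internals: collect source files from several repository roots into one lookup table so a reference in one repo (say, a test framework) can be resolved against another (say, the kernel tree). The suffix filter and example paths are illustrative only.

    from pathlib import Path

    def index_repos(roots, suffixes=(".c", ".h", ".py")):
        """Map file name -> path across every repo root given."""
        index = {}
        for root in roots:
            for path in Path(root).expanduser().rglob("*"):
                if path.suffix in suffixes:
                    index.setdefault(path.name, path)  # first match wins; illustrative
        return index

    # Example (hypothetical paths):
    # index = index_repos(["~/src/linux", "~/src/test-framework"])
    # index.get("skbuff.h")  # resolve a header referenced from the test repo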

RAG Compiler/static analysis tool output

In order to support features like /resolve and to improve code gen, providing the output of the compiler and static analysis tools that are relevant to the current context in a retrieval-augmented generation fashion would be helpful (in the case o...
11 months ago in watsonx Code Assistant Ideas / watsonx Code Assistant (general) 1 Future consideration
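A minimal sketch of the technique being proposed, assuming a generic C toolchain rather than anything WCA-specific: run the compiler in syntax-only mode, keep the diagnostics that mention the file being edited, and prepend them to the prompt so a retrieval-augmented flow can see them.

    import subprocess

    def diagnostics_for(source_file):
        """Collect compiler diagnostics that mention source_file."""
        result = subprocess.run(
            ["gcc", "-fsyntax-only", "-Wall", source_file],  # any compiler/linter works
            capture_output=True, text=True,
        )
        return [line for line in result.stderr.splitlines() if source_file in line]

    def build_context(source_file, user_prompt):
        """Prepend relevant diagnostics to the text sent to the model."""
        notes = "\n".join(diagnostics_for(source_file))
        return f"Compiler output:\n{notes}\n\nTask:\n{user_prompt}"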

Create a map/flow chart of programs based on what you might want to grep, like a structure, function, header, or other piece of text

A huge swathe of time is spent doing greps, opening files, and looking for references when trying to identify code changes that need to be made, files that need to be rebuilt, or when trying to architect a new solution. To save time, it would be wonderful ...
8 months ago in watsonx Code Assistant Ideas / watsonx Code Assistant (general) 0 Future consideration
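As an illustration of the kind of map being asked for (hypothetical, not an existing WCA feature): walk a source tree, grep for a symbol, and build a file-to-line map that a flow chart or dependency view could be drawn from. The file-suffix filter and example call are assumptions.

    import os, re
    from collections import defaultdict

    def reference_map(root, symbol, suffixes=(".c", ".h")):
        """Map each file under root to the line numbers where symbol appears."""
        refs = defaultdict(list)
        pattern = re.compile(rf"\b{re.escape(symbol)}\b")
        for dirpath, _, files in os.walk(root):
            for name in files:
                if not name.endswith(suffixes):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, start=1):
                        if pattern.search(line):
                            refs[path].append(lineno)
        return dict(refs)

    # Example (hypothetical): reference_map("drivers/net", "ndo_start_xmit")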

Track % of code generated in yaml file - automatically measuring WCA adoption

NOTE: This relates to another of my ideas, using (G) as an identifier for generated content: https://ideas.ibm.com/ideas/WCAST-I-297 Explanation: With a short (G) identifier for generated code, there's no indication which model was used or whether...
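A minimal sketch of the metric, assuming the short "(G)" marker from the related idea (WCAST-I-297) is left on each generated line; everything else here is hypothetical. The share of marked lines over non-blank lines gives a rough per-file adoption figure.

    def generated_ratio(path, marker="(G)"):
        """Fraction of non-blank lines carrying the generated-code marker."""
        total = 0
        generated = 0
        with open(path, errors="ignore") as fh:
            for line in fh:
                if not line.strip():
                    continue
                total += 1
                if marker in line:
                    generated += 1
        return generated / total if total else 0.0

    # Example (hypothetical file): round(generated_ratio("deploy.yaml") * 100, 1)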

(G) for generated like © for copyright

Simply put, instead of: // Assisted by watsonx Code Assistant // Code generated by WCA@IBM in this programming language is not approved for use in IBM product development. def some_generated_function(data): return process(data) This: # (G) ibm-gra...
9 months ago in watsonx Code Assistant Ideas / watsonx Code Assistant (general) 0 Not under consideration

Add hotkeys to interact with WCA context input

Add hotkeys to: use selected text as context in a fresh chat; use selected text as context in the current chat; focus the chat text input. Would really improve the usability as it wouldn't force your hands off the keyboard; it could also potentially push peo...
over 1 year ago in watsonx Code Assistant Ideas / watsonx Code Assistant (general) 0 Not under consideration