fix(watsonx): use prompt-based JSON parsing instead of response_format #89
Open
haroldship wants to merge 4 commits into main from fix/watsonx-empty-response-format
Commits (4)
2b1e4e5 fix(watsonx): use prompt-based JSON parsing instead of response_format (haroldship)
d327680 fix(watsonx): correct root-cause comment — / is the trigger, not vllm… (haroldship)
196f509 empty commit to kick off CI (haroldship)
0dfad03 fix(watsonx): scope prompt-based JSON parsing to $defs/$ref schemas (haroldship)
Conversations
Review bot:
Don't keep `function_calling` on the known-broken watsonx path. The new inline note says watsonx `function_calling` returns no `tool_calls`, but line 167 still routes that mode through `llm.with_structured_output(...)`. Any caller that explicitly selects `wx_json_mode="function_calling"` will keep seeing the old failure. Either send that mode through the same prompt+parser fallback or reject it with a clear error.
haroldship:
Valid point. However, no caller in the codebase explicitly passes `wx_json_mode="function_calling"` for watsonx today — the default is `"response_format"` and the only explicit overrides are `"no_format"` (for code_agent, reflection, etc.). The `function_calling` and `json_mode` branches at line 167 are pre-existing code that was not modified by this PR; they would only be hit if a caller explicitly opts in. That said, if someone does explicitly pass `function_calling` for watsonx, they'd hit the same vLLM issue. We can address that in a follow-up if needed, but changing pre-existing untouched branches is out of scope for this fix.
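For contrast, the reviewer's other remediation option (reject the mode with a clear error), combined with the `$defs`/`$ref` scoping from commit 0dfad03, could look roughly like the sketch below. `resolve_structured_output` is a hypothetical helper, not the actual branch at line 167, and it reuses the `with_prompt_based_json` helper sketched above:

```python
def resolve_structured_output(llm, schema, wx_json_mode: str = "response_format"):
    """Hypothetical routing sketch; not the code merged in this PR."""
    if wx_json_mode in ("function_calling", "json_mode"):
        # watsonx's vLLM backend returns no tool_calls / empty content on these
        # paths, so fail loudly instead of silently returning an empty result.
        raise ValueError(
            f"wx_json_mode={wx_json_mode!r} is not supported for watsonx; "
            "use 'response_format' or 'no_format'."
        )
    if wx_json_mode == "no_format":
        return llm

    json_schema = schema.model_json_schema()
    # Commit 0dfad03 scopes the prompt-based fallback to schemas that carry
    # $defs (i.e. nested $ref models), which are the ones that trip vLLM
    # guided decoding; flat schemas keep the native structured-output path.
    if "$defs" in json_schema:
        return with_prompt_based_json(llm, schema)
    return llm.with_structured_output(schema)
```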
Review bot:
@haroldship, that's a fair and reasonable scoping decision — the current PR is a focused fix for the `response_format` empty-content bug, and touching pre-existing branches would expand its scope unnecessarily. Would you like me to open a follow-up GitHub issue to track the risk that `wx_json_mode="function_calling"` (and `"json_mode"`) on `ChatWatsonx` will hit the same vLLM guided-decoding failure if a caller ever opts in? That way it's documented and won't be forgotten.
haroldship:
yes, open the issue
Review bot:
@haroldship, done! Follow-up issue #90 has been opened and assigned to you. It documents the latent risk that `wx_json_mode="function_calling"` and `"json_mode"` on `ChatWatsonx` will silently hit the same vLLM guided-decoding failure, links back to this PR and issue #88, and proposes the two remediation options (prompt+parser fallback or an explicit error). The note to revisit once IBM upgrades to vLLM 0.8.2+ is included as well.