Conversation
Co-authored-by: Dmitry Tokarev <dtokarev@nvidia.com>
whoisj
previously approved these changes
Oct 30, 2025
…to yinggeh/tri-49-request-for-openai-compatible-api-endpoints-for-triton
Contributor
Author
Rebase to r25.10 to run pipeline.
pskiran1
reviewed
Nov 3, 2025
… yinggeh/tri-49-request-for-openai-compatible-api-endpoints-for-triton
pskiran1
previously approved these changes
Nov 5, 2025
Member
pskiran1
left a comment
LGTM! Thank you for adding this feature.
whoisj
previously approved these changes
Nov 5, 2025
Contributor
whoisj
left a comment
overall, this LGTM.
left one suggestion, but I approve of these changes as they are.
    backend = self.backend

    # Request conversion from OpenAI format to backend-specific format
    if backend == "vllm":
Contributor
wouldn't this be safer as below?

    if backend == 'trtllm':
        # do something
    elif backend == 'vllm':
        # do something else
    else:
        raise ValueError(f'Unknown backend "{backend}" provided.')
Contributor
Author
server/python/openai/openai_frontend/engine/triton_engine.py
Lines 475 to 499 in b48425e
backend can be ensemble.
Contributor
makes sense.
when `backend == "ensemble"` we hit this code:

    if request_type == RequestKind.GENERATION:
        return _create_trtllm_generate_request
    else:
        return _create_trtllm_embedding_request

is that desirable?
also, adding the switch-like statement future-proofs the function.
Contributor
Author
    # Use TRT-LLM format as default for everything else. This could be
    # an ensemble, a python or BLS model, a TRT-LLM backend model, etc.
whoisj
approved these changes
Nov 6, 2025
What does the PR do?
Adds support for `/v1/embeddings` inference requests to the OpenAI API frontend in the vLLM container.

Checklist
<commit_type>: <Title>
Commit Type:
Check the conventional commit type box here and add the label to the GitHub PR.
Related PRs:
triton-inference-server/vllm_backend#104
Where should the reviewer start?
Test plan:
Caveats:
Background
Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)
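For reference, a minimal client call to the new `/v1/embeddings` endpoint might look like the sketch below. The URL and model name are placeholders (this PR's actual port and model configuration are not stated in the thread); the request payload follows the OpenAI embeddings schema:

```python
import json
import urllib.request

# Placeholder endpoint; adjust host/port to your frontend deployment.
OPENAI_FRONTEND_URL = "http://localhost:9000/v1/embeddings"


def build_embeddings_request(model: str, texts: list) -> urllib.request.Request:
    """Build an OpenAI-style /v1/embeddings POST request."""
    payload = {"model": model, "input": texts}
    return urllib.request.Request(
        OPENAI_FRONTEND_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Sending this with urllib.request.urlopen(req) should return
# OpenAI-schema JSON, where response["data"][0]["embedding"] holds
# the embedding vector.
req = build_embeddings_request("my_embedding_model", ["Hello, world!"])
```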