Conversation
Signed-off-by: SamitHuang <285365963@qq.com>
Signed-off-by: samithuang <285365963@qq.com>
Add rollout backend client and test qwen2.5-0.5b non-colocate training
Signed-off-by: samithuang <285365963@qq.com>
Eliminate intermediate CPU tensors
Reorder weight synchronization support for colocate and non-colocate scenarios in the goal plan.
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request outlines and partially implements a significant architectural enhancement to Slime, enabling the use of vLLM as an alternative rollout backend for Group Relative Policy Optimization (GRPO) training. The core objective is to maintain a stable training interface while introducing flexibility in the inference backend. This involves creating a clear abstraction layer for backend communication, managing the vLLM server lifecycle within the Ray ecosystem, and implementing a weight synchronization strategy that adapts to different deployment scenarios, such as colocate and non-colocate GPU setups. The initial phase of this plan has been validated, demonstrating the feasibility and stability of vLLM integration.
Code Review
This pull request introduces a significant architectural change by adding vLLM as a new rollout backend, laying a solid foundation for supporting multiple backends with a well-designed abstraction layer and necessary modifications for weight synchronization and server management. However, a security audit identified high-severity concerns regarding the exposure of unauthenticated control endpoints on all network interfaces and medium-severity risks related to the automatic execution of remote code from model repositories. It is recommended to restrict network exposure and disable trust_remote_code by default. Additionally, there are critical inconsistencies with RFCs, especially concerning the colocate mode for vLLM, and opportunities to improve maintainability and portability in scripts and documentation.
```python
use_tensor_update = self.args.colocate and getattr(self.args, "rollout_backend", "sglang") != "vllm"
update_weight_cls = UpdateWeightFromTensor if use_tensor_update else UpdateWeightFromDistributed
```
The logic here seems to disable the use of UpdateWeightFromTensor for the vLLM backend, even when self.args.colocate is true. This forces the use of UpdateWeightFromDistributed for vLLM in all cases. This contradicts the RFCs (rfc-vllm-rollout-backend-en.md and rfc-vllm-rollout-backend.md), which state that colocate mode for vLLM should use a more efficient CUDA IPC-based weight transfer, typically handled within UpdateWeightFromTensor. This change effectively disables the performance optimization for colocate mode with vLLM.
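One possible fix, sketched under the assumption that `UpdateWeightFromTensor` already implements (or will implement) the CUDA IPC path for vLLM described in the RFC:

```python
# Sketch: restore tensor-based (CUDA IPC) weight updates whenever colocate is
# enabled, regardless of rollout backend, as the RFC describes.
use_tensor_update = self.args.colocate
update_weight_cls = UpdateWeightFromTensor if use_tensor_update else UpdateWeightFromDistributed
```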
| "--tensor-parallel-size", str(tp), | ||
| "--port", str(self.server_port), | ||
| "--host", "0.0.0.0", | ||
| "--weight-transfer-config", '{"backend": "nccl"}', |
The vLLM server is configured to listen on all network interfaces (0.0.0.0) without any authentication, exposing sensitive control endpoints. This poses a high-severity risk as an attacker could manipulate model weights or disrupt the training process. It is recommended to bind the server to 127.0.0.1 or implement authentication if remote access is needed. Additionally, the weight-transfer-config is hardcoded to nccl, preventing the use of ipc for zero-copy weight transfer in colocate mode, which is a performance optimization outlined in the RFC.
```python
weight_transfer_backend = "ipc" if self.args.colocate else "nccl"
cmd.extend(["--weight-transfer-config", f'{{"backend": "{weight_transfer_backend}"}}'])
```
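The network-exposure half of the recommendation could look like the following sketch; the `vllm_host` argument is hypothetical and would need to be added to the CLI:

```python
# Hypothetical: default to loopback; remote access requires an explicit opt-in flag
host = getattr(self.args, "vllm_host", "127.0.0.1")
cmd.extend(["--host", host])
```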
```bash
docker run -itd --gpus all --ipc=host --shm-size=128g --net=host --privileged=true --restart=always \
```
| "--host", "0.0.0.0", | ||
| "--weight-transfer-config", '{"backend": "nccl"}', | ||
| "--seed", str(seed), | ||
| "--trust-remote-code", |
The application enables trust_remote_code=True when launching the vLLM server. This setting allows the execution of arbitrary Python code from the model repository. An attacker who can influence the model path (e.g., via the --hf-checkpoint or --vllm-model CLI arguments) can achieve remote code execution by pointing the application to a malicious repository.
Recommendation: Disable trust_remote_code by default. If it is required for certain models, provide a mechanism for the user to explicitly enable it with a clear security warning.
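A minimal sketch of the opt-in approach, assuming a new `--trust-remote-code` CLI flag plumbed through to `self.args` (the flag name is an assumption, not an existing slime option), with `logger` being the module logger already used by the surrounding code:

```python
# Hypothetical: forward --trust-remote-code only when the user opted in explicitly
if getattr(self.args, "trust_remote_code", False):
    logger.warning(
        "trust_remote_code is enabled; the model repository may execute arbitrary code."
    )
    cmd.append("--trust-remote-code")
```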
```python
if not output_token_ids and text:
    logger.warning("vLLM response missing token_ids, falling back to tokenizer")
    from slime.utils.processing_utils import load_tokenizer

    tokenizer = load_tokenizer(self.args.hf_checkpoint, trust_remote_code=True)
```
The application enables trust_remote_code=True when loading the tokenizer in the VLLMClient. This allows the execution of arbitrary Python code from the model repository. An attacker who can influence the model path can achieve remote code execution by pointing the application to a malicious repository.
Recommendation: Disable trust_remote_code by default and only enable it if explicitly requested by the user.
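The same hypothetical opt-in flag could be reused here; a sketch, with the `trust_remote_code` attribute name assumed:

```python
# Hypothetical: propagate the user's explicit choice instead of hardcoding True
trust_remote_code = getattr(self.args, "trust_remote_code", False)
tokenizer = load_tokenizer(self.args.hf_checkpoint, trust_remote_code=trust_remote_code)
```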
```diff
@@ -0,0 +1,360 @@
# RFC: Add vLLM as a Rollout Backend in Slime
```
This pull request introduces three separate RFC documents (docs/en/advanced/rfc-vllm-rollout-backend.md, rfc-vllm-rollout-backend-en.md, rfc-vllm-rollout-backend.md) covering the same feature. This can lead to confusion and make maintenance difficult. It's recommended to consolidate these into a single, canonical RFC and remove the outdated or redundant versions. For instance, this document appears to be an older draft compared to rfc-vllm-rollout-backend-en.md.
```bash
    --hf-checkpoint /root/Qwen2.5-0.5B-Instruct/
    --ref-load /root/Qwen2.5-0.5B-Instruct_torch_dist/
)

# num-rollout:100
ROLLOUT_ARGS=(
    --prompt-data /root/gsm8k/train.parquet
```
This script contains hardcoded absolute paths for model checkpoints and datasets (e.g., /root/Qwen2.5-0.5B-Instruct/, /root/gsm8k/train.parquet). This is not portable. It would be better to define these paths as variables at the top of the script or pass them as arguments, which would make the script easier to adapt for different setups.
| env["NCCL_P2P_DISABLE"] = "1" | ||
| env.setdefault("NCCL_IB_DISABLE", "1") |
These NCCL environment variables (NCCL_P2P_DISABLE, NCCL_IB_DISABLE) are set without any explanation. To improve maintainability, it would be beneficial to add comments explaining why these settings are necessary (e.g., for specific hardware configurations or to work around known issues).
```python
# Disable P2P and InfiniBand for compatibility with certain environments
env["NCCL_P2P_DISABLE"] = "1"
env.setdefault("NCCL_IB_DISABLE", "1")
```

```python
output_token_ids = choice.get("token_ids") or []
logprobs_obj = choice.get("logprobs") or {}
raw_logprobs = logprobs_obj.get("token_logprobs") or []
output_token_logprobs = [float(lp) if lp is not None else 0.0 for lp in raw_logprobs]
```
When a logprob value is None, it's being replaced with 0.0. While this is a reasonable default, a log probability of 0.0 corresponds to a probability of 1.0, which might be a valid value. This could lead to ambiguity. Consider using a very small number (e.g., -1e9) or float('-inf') to represent a missing logprob, or add a comment to clarify this behavior and its implications for downstream calculations like KL divergence.
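A sketch of the `-inf` variant; whether downstream consumers (e.g., the KL divergence computation) tolerate `-inf` is an assumption that would need verification:

```python
# Hypothetical: use -inf so a missing logprob can never be mistaken for a real
# log probability of 0.0 (i.e., probability 1.0)
output_token_logprobs = [
    float(lp) if lp is not None else float("-inf")
    for lp in raw_logprobs
]
```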
* Draft router design
* Add vllm router
* Add router to script
* Fix gpu memory utilization
* Fix output token ids
* Add more nccl flag
* Fix bug

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Dev Plan
P0:
Results
Qwen2.5-0.5B GRPO training convergence on GSM8K dataset: