
[trainer] feat: support one step off policy for diffusion model #55

Draft
chenyingshu wants to merge 5 commits into zhtmike:verl-omni from chenyingshu:one-step-off-async

Conversation


@chenyingshu commented on Mar 20, 2026

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Support one-step-off async policy training for diffusion models with the FlowGRPO algorithm (FSDP2).

Main changes:

  • Created a new trainer for one-step-off async training (see the sketch after this list).
  • Use the NCCL backend and adjust some of the weight-update logic.
  • Support an async reward loop.
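
For intuition, here is a minimal sketch of the one-step-off pattern. All names (`rollout.generate`, `reward_loop.score`, `trainer.update`) are hypothetical stand-ins, not the actual trainer API: rollout generates the next batch while the trainer updates on the current one, so generated samples are always one policy step stale.

```python
# Minimal sketch of one-step-off async training (hypothetical API, not verl's code).
# Rollout generates batch i+1 while the trainer updates on batch i, so rollout
# weights are always one optimizer step stale ("one step off").
import asyncio

async def one_step_off_loop(rollout, trainer, reward_loop, num_steps: int):
    pending = asyncio.create_task(rollout.generate())      # prime the pipeline
    for _ in range(num_steps):
        batch = await pending                              # collect finished rollouts
        pending = asyncio.create_task(rollout.generate())  # start next batch with stale weights
        scored = await reward_loop.score(batch)            # async reward computation
        trainer.update(scored)                             # gradient step on current batch
        await rollout.update_weights(trainer.state_dict()) # push new weights (e.g., via NCCL)
    await pending                                          # drain the last in-flight generation
```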

Progress:

  • Passed UT (w/ FSDP2)
  • E2E training validation and performance check: examples/flowgrpo_trainer/run_flowgrpo_one_step_off.sh
  • Code cleaning and refactoring
  • Fully-async policy support in the next PR

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

See the unit test in tests/special_e2e/run_one_step_off_policy.sh. Run it with:

```bash
CUDA_VISIBLE_DEVICES=0,1,2 NUM_GPUS=2 MODEL_ID=tiny-random/Qwen-Image adv_estimator=flow_grpo n_gpu_rollout=1 bash tests/special_e2e/run_one_step_off_policy.sh
```

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

  1. Change configs:
     Major changes include using the 'nccl' backend for weight updates between the separate trainer and rollout, and setting the number of GPUs assigned to each. E.g.,

     ```
     actor_rollout_ref.hybrid_engine=False
     actor_rollout_ref.rollout.checkpoint_engine.backend=nccl

     trainer.nnodes=1
     trainer.n_gpus_per_node=${n_gpus_training}
     rollout.nnodes=1
     rollout.n_gpus_per_node=${n_gpus_rollout}
     ```
  2. Run one-step-off training:

     ```bash
     # actor and rollout are placed separately (hybrid_engine=False),
     # each with its own GPU resources
     python3 -m verl.experimental.one_step_off_policy.main_ppo \
         --config-path=config \
         --config-name='one_step_off_ppo_diffusion_trainer.yaml' \
         algorithm.adv_estimator=flow_grpo \
         actor_rollout_ref.actor.strategy=fsdp2 \
         actor_rollout_ref.rollout.checkpoint_engine.backend=nccl \
         actor_rollout_ref.hybrid_engine=False \
         trainer.nnodes=1 \
         trainer.n_gpus_per_node=4 \
         rollout.nnodes=1 \
         rollout.n_gpus_per_node=4
     ```

Or run the example script:

```bash
bash examples/flowgrpo_trainer/run_flowgrpo_one_step_off.sh
```

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

  • Trainer:
    • New class SeparateRayFlowGRPOTrainer in verl/experimental/separation/ray_diffusion_trainer.py.
      Note that it is expected to be reused for the fully-async policy, which will be implemented in the next PR.
    • New class OneStepOffRayFlowGRPOTrainer in verl/experimental/one_step_off_policy/ray_diffusion_trainer.py.
    • Added a condition to use OneStepOffRayFlowGRPOTrainer and a standalone reward loop in verl/experimental/one_step_off_policy/main_ppo.py.
  • Config: New config verl/experimental/one_step_off_policy/config/one_step_off_ppo_diffusion_trainer.yaml.
  • Weight Update: Minor changes to apply LoRA when updating the weights of the separate rollout, mainly driven by the NCCL backend usage (see the sketch after this list):
    • Return peft_config and base_sync_done from ActorRolloutRefWorker.update_weights() in verl/workers/engine_workers.py.
    • Pass peft_config and base_sync_done to CheckpointEngineWorker.update_weights() in verl/checkpoint_engine/base.py.
    • Modified collect_lora_params in verl/utils/fsdp_utils.py, since with the NCCL backend neither layered_summon nor base_sync_done is passed.
      Note (susan): I am not sure about this part.
  • UT: Added an E2E test in .github/workflows/e2e_one_step_off_policy.yml and tests/special_e2e/run_one_step_off_policy.sh.
  • Doc: Updated docs/advance/one_step_off.md and verl/experimental/one_step_off_policy/README.md.
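
As referenced in the Weight Update item above, a rough sketch of the pass-through. The signatures and helpers are illustrative, reconstructed from the file list, not verbatim code from this PR:

```python
# Illustrative sketch of the LoRA-aware weight-update pass-through (hypothetical).

# verl/workers/engine_workers.py (sketch): update_weights also exposes the
# LoRA metadata needed by the NCCL-backed checkpoint engine.
class ActorRolloutRefWorker:
    def update_weights(self):
        params = self._gather_params()  # hypothetical helper
        return params, self.peft_config, self.base_sync_done

# verl/checkpoint_engine/base.py (sketch): the metadata is forwarded so the
# engine can sync only the LoRA adapters once the base weights are in place.
class CheckpointEngineWorker:
    def update_weights(self, params, peft_config=None, base_sync_done=False):
        if peft_config is not None and base_sync_done:
            self._sync(params, adapters_only=True)   # hypothetical sync call
        else:
            self._sync(params, adapters_only=False)
```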

Checklist Before Submitting

Important

Please check all the following items before requesting a review; otherwise the reviewer may deprioritize this PR.

@chenyingshu changed the title from "[one_step_off] feat: support one step off policy for diffusion model" to "[trainer] feat: support one step off policy for diffusion model" on Mar 20, 2026
@zhtmike force-pushed the verl-omni branch 5 times, most recently from eb393e7 to 0cbbe96 on April 2, 2026 01:48