Hi, thank you for your fantastic work! I have two questions regarding the training setup:
- Training Time per Episode
Could you share the approximate training time per episode (or the total training time for all 10 episodes) on your hardware setup (e.g. N×8 A100s)?
- [Perhaps more importantly] Multi-Node/Multi-GPU Training Configuration
I noticed different parallel-training parameters in the Notion documentation and the example scripts, and I'd like to know the optimal configuration for large-scale training. Could you provide some guidance on how to set the key parameters (e.g. actor_num_nodes, vllm_num_engines, vllm_tensor_parallel_size, vllm_gpu_memory_utilization, and (micro_)?(train|rollout)_batch_size) for best performance on a specific hardware setup?
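To make the question concrete, here is a rough sketch of how I currently understand the constraints among these flags on a fixed GPU budget. Everything here is my own assumption, not taken from your docs: the values are hypothetical, `actor_num_gpus_per_node` is a companion flag I am guessing exists, and the divisibility rules are just what I would expect from a typical Ray + vLLM RLHF setup. Please correct me if the real constraints differ.

```python
# Illustrative sketch only: hypothetical values and assumed constraints
# for an OpenRLHF-style launch on a fixed GPU budget.

TOTAL_GPUS = 16  # e.g. 2 nodes x 8 A100s (assumption)

cfg = {
    "actor_num_nodes": 1,
    "actor_num_gpus_per_node": 8,    # assumed companion flag
    "vllm_num_engines": 4,
    "vllm_tensor_parallel_size": 2,
    "vllm_gpu_memory_utilization": 0.9,
    "train_batch_size": 128,
    "micro_train_batch_size": 4,
    "rollout_batch_size": 256,
    "micro_rollout_batch_size": 16,
}

# GPU budget: actor GPUs plus vLLM engine GPUs should fit the cluster
# (unless actors and engines are colocated, which some setups allow).
actor_gpus = cfg["actor_num_nodes"] * cfg["actor_num_gpus_per_node"]
vllm_gpus = cfg["vllm_num_engines"] * cfg["vllm_tensor_parallel_size"]
assert actor_gpus + vllm_gpus <= TOTAL_GPUS

# Global batches are presumably split into micro-batches across
# data-parallel actor ranks, so these should divide evenly.
assert cfg["train_batch_size"] % (cfg["micro_train_batch_size"] * actor_gpus) == 0
assert cfg["rollout_batch_size"] % cfg["micro_rollout_batch_size"] == 0

# Render the flags as a command-line fragment for a launch script.
print(" ".join(f"--{k} {v}" for k, v in cfg.items()))
```

Is this roughly the right mental model for sizing these parameters, or are there other interactions (e.g. between vllm_gpu_memory_utilization and rollout batch sizes) that matter more in practice?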
Thank you in advance for your time and insights! Looking forward to your reply.
Best regards.