Hi, thanks for your great work!
May I ask if there is any plan or timeline to release the training script? Or could you please provide some guidelines on how to implement the training process?
According to the paper, I understand that in the SFT dataset each query is paired with six reasoning paths (at least one of which is correct) concatenated with the final summary. I'm wondering whether the Sequence-Aware Positional Embedding, the attention mask, and the position-ID handling need to be modified in the LLaMA-Factory source code during training?
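To make my question concrete, here is a rough sketch of what I imagine the modification would look like. This is purely my own guess (not from the paper or the repo): each path restarts its position IDs right after the query, paths are masked off from one another, and the summary attends to everything. The function name and layout are hypothetical.

```python
# Hypothetical sketch (my guess, not the authors' implementation):
# the sequence is laid out as [query][path_1]...[path_k][summary].
# Each reasoning path restarts its positions after the query, and the
# attention mask keeps paths from seeing each other.

def build_positions_and_mask(query_len, path_lens, summary_len):
    segments = ([("query", query_len)]
                + [("path", n) for n in path_lens]
                + [("summary", summary_len)])
    position_ids, seg_ids = [], []
    for seg_idx, (kind, n) in enumerate(segments):
        if kind == "path":
            start = query_len                     # each path restarts after the query
        elif kind == "summary":
            start = query_len + max(path_lens)    # summary follows the longest path
        else:
            start = 0                             # query starts at position 0
        position_ids.extend(range(start, start + n))
        seg_ids.extend([seg_idx] * n)

    total = len(position_ids)
    # mask[i][j] == 1 means token i may attend to token j (causal overall)
    mask = [[0] * total for _ in range(total)]
    for i in range(total):
        for j in range(i + 1):
            same_segment = seg_ids[i] == seg_ids[j]
            j_is_query = seg_ids[j] == 0
            i_is_summary = seg_ids[i] == len(segments) - 1
            if same_segment or j_is_query or i_is_summary:
                mask[i][j] = 1
    return position_ids, mask
```

For example, with a 2-token query, two paths of lengths 3 and 2, and a 2-token summary, both paths get positions starting at 2, the summary starts at 5, and a token in the second path can attend to the query and its own path but not to the first path. Is something along these lines what the training code does, or is the stock LLaMA-Factory causal mask sufficient?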