Hi, and thanks for releasing CoDAE!
In the README under “Inference & Merging”, it says:
> All inferences are generated using VLLM_infer in the appropriate scripts (cot_eval_idk.sh, jailbreak_idk.sh, judge_A_idk.sh, etc.)
However, I can’t find any implementation of this in the repository — in particular, there is no scripts/vllm_infer.py (or any other vllm_infer file).
Could you please add the missing scripts/vllm_infer.py (or whichever script you used for vLLM inference)?
Having this file (or equivalent documentation) would make it much easier to fully reproduce your experimental setup.
Thanks!