Hello, I have two questions:

1. The README only says to download the DreamVideo models, but two additional models are also required: `open_clip_pytorch_model.bin` and `v2-1_512-ema-pruned.ckpt`. The README does not specify which versions of these two models to download. I downloaded `open_clip_pytorch_model.bin` from `iic/text-to-video-synthesis`, and `v2-1_512-ema-pruned.ckpt` from `stabilityai/stable-diffusion-2-1-base`. Is that correct?

2. Following the README tutorial, `python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_carTurn.yaml` generated the two videos below (`a_._running_on_the_road_8888_0.mp4` and `a_._running_on_the_beach_8888_1.mp4`):

https://github.com/user-attachments/assets/fa11f41a-e2ff-4c71-83da-7a3ba8023cf9

https://github.com/user-attachments/assets/02ceda57-0bc2-4a10-8a6b-0183a9e71160

`python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_playingGuitar.yaml` generated the three videos below (among them `a_._is_playing_guitar_8888_1.mp4` and `a_._is_playing_guitar_on_Mars_8888_0.mp4`):

https://github.com/user-attachments/assets/041ec702-32f0-4250-9dd4-07c5925fa454

https://github.com/user-attachments/assets/1f8dab25-56ae-480b-a5e7-acf9c6a3753e

https://github.com/user-attachments/assets/5ee63486-8fec-4fa7-b4e1-092ebd6251cc

Could you tell me what is going wrong here? Looking forward to your reply.
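One way to rule out a corrupted or mismatched download is to compare a checksum of each local file against the hash published on the corresponding model page (Hugging Face / ModelScope model pages typically show one). A minimal sketch; the local paths below are placeholders, not the repo's actual layout:

```python
import hashlib
from pathlib import Path


def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical local paths -- adjust to wherever the checkpoints were saved.
for ckpt in [
    "models/open_clip_pytorch_model.bin",
    "models/v2-1_512-ema-pruned.ckpt",
]:
    if Path(ckpt).exists():
        print(ckpt, sha256sum(ckpt))
    else:
        print(ckpt, "not found")
```

If a digest differs from the one on the model page, the file is the wrong version or was corrupted during download.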