This repository demonstrates how to use CNB "Cloud Native Development" to try out the HunyuanVideo-I2V image-to-video model with one click. For more detailed usage, refer to the model's original English or Chinese README.md.

Fork this repository, then click "Cloud Native Development" to enter the remote development environment. Run the following command in the Terminal to perform image-to-video generation:
```bash
python3 sample_image2video.py \
    --model HYVideo-T/2 \
    --prompt "An Asian man with short hair in black tactical uniform and white clothes waves a firework stick." \
    --i2v-mode \
    --i2v-image-path ./assets/demo/i2v/imgs/0.jpg \
    --i2v-resolution 360p \
    --i2v-stability \
    --infer-steps 20 \
    --video-length 129 \
    --flow-reverse \
    --flow-shift 7.0 \
    --seed 0 \
    --embedded-cfg-scale 6.0 \
    --use-cpu-offload \
    --save-path ./results
```
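When the run completes, the generated video is written under the directory passed to `--save-path` (`./results` above). A quick check:

```bash
# List generated videos, newest first
ls -lht ./results/*.mp4
```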
Notes:

| Resolution | infer-steps | Time | Result file |
|---|---|---|---|
| 360p | 20 | ~4 min | (run it yourself) |
| 540p | 20 | ~30 min | results/'2025-03-31-15:57:09_seed0_An Asian man with short hair in black tactical uniform and white clothes waves a firework stick..mp4' |
| 720p | 50 | ~112 min | results/'2025-03-31-13:18:43_seed0_An Asian man with short hair in black tactical uniform and white clothes waves a firework stick..mp4' |
Done. You can tune the prompt and parameters for better generation results. For detailed descriptions of the other parameters and model usage, see README.md.
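As a concrete example of parameter tuning, the sketch below sweeps the three resolutions from the table above with a fixed seed so the outputs are directly comparable. It reuses only flags already shown in this README; note that the higher resolutions in the table were measured with more inference steps, which you may want to adjust as well.

```bash
# Sweep the three resolutions listed above with a fixed seed for comparison.
# Assumes the same working directory and demo assets as the command above.
for res in 360p 540p 720p; do
    python3 sample_image2video.py \
        --model HYVideo-T/2 \
        --prompt "An Asian man with short hair in black tactical uniform and white clothes waves a firework stick." \
        --i2v-mode \
        --i2v-image-path ./assets/demo/i2v/imgs/0.jpg \
        --i2v-resolution "$res" \
        --i2v-stability \
        --infer-steps 20 \
        --video-length 129 \
        --flow-reverse \
        --flow-shift 7.0 \
        --seed 0 \
        --embedded-cfg-scale 6.0 \
        --use-cpu-offload \
        --save-path ./results
done
```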
For training, simply run the following command from the command line. It is configured here to save the LoRA model checkpoints to the log_EXP folder every 50 epochs. Measured hardware consumption on CNB: about 70 GB of VRAM on average, with GPU utilization at 100%.
```bash
sh scripts/run_train_image2video_lora.sh --ckpt-every 50 --epoch 101
```
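If you want to reproduce the resource figures above on your own instance, you can poll the GPU while training runs; this is a minimal sketch using standard `nvidia-smi` query flags, not anything specific to this repo:

```bash
# Print GPU memory use and utilization every 5 seconds during training
nvidia-smi --query-gpu=timestamp,memory.used,utilization.gpu \
    --format=csv -l 5
```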
After training completes, use the following command to load the LoRA model you just trained and generate a video (replace the experiment directory in `--lora-path` with your own):
```bash
python3 sample_image2video.py \
    --model HYVideo-T/2 \
    --prompt "Two people hugged tightly, In the video, two people are standing apart from each other. They then move closer to each other and begin to hug tightly. The hug is very affectionate, with the two people holding each other tightly and looking into each other's eyes. The interaction is very emotional and heartwarming, with the two people expressing their love and affection for each other." \
    --i2v-mode \
    --i2v-image-path ./assets/demo/i2v_lora/imgs/embrace.png \
    --i2v-resolution 360p \
    --i2v-stability \
    --infer-steps 20 \
    --video-length 129 \
    --flow-reverse \
    --flow-shift 5.0 \
    --embedded-cfg-scale 6.0 \
    --seed 0 \
    --use-cpu-offload \
    --save-path ./results \
    --use-lora \
    --lora-scale 1.0 \
    --lora-path /workspace/log_EXP/0003_HYVideo_T2_<replace_with_your_own_path>_i2v_lora/checkpoints/global_step100/pytorch_lora_kohaya_weights.safetensors
```
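The exact `--lora-path` depends on your experiment directory and training step. Assuming the log_EXP layout shown above, a quick way to list the saved LoRA weight files is:

```bash
# Find all saved LoRA weight files under log_EXP
find /workspace/log_EXP -name "pytorch_lora_kohaya_weights.safetensors"
```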
Done. For detailed descriptions of the other parameters, see README.md.
If your focus is AIGC video generation, you can also refer to the ComfyUI-native Hunyuan image-to-video workflow among the CNB samples.