Recent studies have identified Direct Preference Optimization (DPO) as an efficient, reward-free approach to improving video generation quality. However, existing methods largely follow image-domain paradigms and are mainly developed on small-scale models (approximately 2B parameters), limiting their ability to address the challenges unique to video tasks: costly data construction, unstable training, and heavy memory consumption. To overcome these limitations, we introduce GT-Pair, which automatically builds high-quality preference pairs by using real videos as positives and model-generated videos as negatives, eliminating the need for any external annotation. We further present Reg-DPO, which incorporates the supervised fine-tuning (SFT) loss as a regularization term into the DPO loss to improve training stability and generation fidelity. In addition, by combining the Fully Sharded Data Parallel (FSDP) framework with multiple memory optimization techniques, our approach achieves nearly three times the training capacity of FSDP alone. Extensive experiments on both image-to-video (I2V) and text-to-video (T2V) tasks across multiple datasets demonstrate that our method consistently outperforms existing approaches, delivering superior video generation quality.
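For concreteness, a minimal sketch of how the regularized objective could be assembled, assuming the standard Bradley-Terry DPO formulation with a frozen reference model $\pi_{\mathrm{ref}}$, a temperature $\beta$, and a weighting coefficient $\lambda$ on the SFT term; beyond the stated DPO-plus-SFT combination, these specifics are assumptions not given in the abstract. Here $x^{+}$ denotes the real video (the GT-Pair positive), $x^{-}$ the model-generated negative, and $c$ the conditioning (the text prompt and, for I2V, the reference image).

% Sketch of a DPO loss regularized by an SFT term (assumed form)
\begin{align}
  \mathcal{L}_{\mathrm{DPO}} &= -\,\mathbb{E}_{(c,\,x^{+},\,x^{-})}
    \log \sigma\!\Big(
      \beta \log \frac{\pi_{\theta}(x^{+}\mid c)}{\pi_{\mathrm{ref}}(x^{+}\mid c)}
      - \beta \log \frac{\pi_{\theta}(x^{-}\mid c)}{\pi_{\mathrm{ref}}(x^{-}\mid c)}
    \Big), \\
  \mathcal{L}_{\mathrm{SFT}} &= -\,\mathbb{E}_{(c,\,x^{+})}
    \log \pi_{\theta}(x^{+}\mid c), \\
  \mathcal{L}_{\mathrm{Reg\text{-}DPO}} &= \mathcal{L}_{\mathrm{DPO}}
    + \lambda\,\mathcal{L}_{\mathrm{SFT}}.
\end{align}

In a diffusion-based video generator, $\mathcal{L}_{\mathrm{SFT}}$ would typically be realized as the denoising objective evaluated on the positive (real) video, so the regularizer anchors the policy to the ground-truth data while the DPO term pushes it away from its own generations.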