Although text-to-video (TTV) models have recently achieved remarkable success, few approaches have extended TTV to video editing. Motivated by TTV approaches that adapt diffusion-based text-to-image (TTI) models, we propose a video editing framework that requires only a pretrained TTI model and a single <text, video> pair, which we term Edit-A-Video. The framework consists of two stages: (1) inflating the 2D model into a 3D model by appending temporal modules and tuning it on the source video, and (2) inverting the source video into noise and editing it with the target text prompt and attention map injection. These stages enable temporal modeling and preservation of the semantic attributes of the source video. One of the key challenges in video editing is the background inconsistency problem, where regions not included in the edit suffer undesirable and temporally inconsistent alterations. To mitigate this issue, we introduce a novel mask blending method, termed sparse-causal blending (SC Blending). We improve on previous mask blending methods by reflecting temporal consistency, so that the edited region exhibits smooth transitions while the unedited regions remain spatio-temporally consistent. We present extensive experimental results over various types of text and videos, and demonstrate the superiority of the proposed method over baselines in terms of background consistency, text alignment, and video editing quality.
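To illustrate the mask blending idea described above, the following is a minimal sketch of sparse-causal blending, not the authors' implementation. It assumes hypothetical inputs: per-frame foreground masks (e.g. thresholded attention maps) and per-frame source and edited latents; for each frame the mask is aggregated with the masks of the first and previous frames before compositing, so the edited region transitions smoothly while the background is copied from the source.

```python
import torch

def sc_blend(src: torch.Tensor, edit: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Blend edited latents into source latents frame by frame.

    src, edit: (T, C, H, W) latents; masks: (T, 1, H, W) values in [0, 1].
    For frame t, the blending mask is the union of the masks of the first
    frame, the previous frame, and the current frame (sparse-causal
    aggregation), so the edit evolves smoothly over time while regions
    outside the mask are taken unchanged from the source video.
    """
    T = src.shape[0]
    out = torch.empty_like(src)
    for t in range(T):
        m = masks[t]
        if t > 0:
            m = torch.maximum(m, masks[t - 1])  # previous frame
            m = torch.maximum(m, masks[0])      # first (anchor) frame
        out[t] = m * edit[t] + (1.0 - m) * src[t]
    return out

# Usage on dummy data (8 frames of 64x64 latents with 4 channels):
if __name__ == "__main__":
    T, C, H, W = 8, 4, 64, 64
    src, edit = torch.randn(T, C, H, W), torch.randn(T, C, H, W)
    masks = (torch.rand(T, 1, H, W) > 0.5).float()
    print(sc_blend(src, edit, masks).shape)  # torch.Size([8, 4, 64, 64])
```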