This paper proposes an adaptive compact attention model for few-shot video-to-video translation. Existing works in this domain rely only on pixel-wise attention features and do not consider the correlations among multiple reference images, which leads to heavy computation but limited performance. We therefore introduce a novel adaptive compact attention mechanism that efficiently extracts contextual features jointly from multiple reference images; the encoded view-dependent and motion-dependent information significantly benefits the synthesis of realistic videos. Our core idea is to extract compact basis sets from all the reference images as higher-level representations. To further improve reliability, we also propose a novel method based on Delaunay triangulation that, at inference time, automatically selects the most informative references according to the input label. We extensively evaluate our method on a large-scale talking-head video dataset and a human dancing dataset; the experimental results demonstrate that our method produces photorealistic and temporally consistent videos and achieves considerable improvements over state-of-the-art methods.
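To make the core idea concrete, the sketch below illustrates compact attention in PyTorch: rather than attending to every pixel of every reference, the K reference feature maps are first summarized into a small set of basis vectors, and the query (label) features attend only to that compact basis. This is a minimal illustrative sketch, not the authors' implementation; all layer names, shapes, and the choice of soft-assignment pooling are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompactAttention(nn.Module):
    """Illustrative sketch: attention over a compact basis extracted from multiple references.

    Hypothetical module; layer names, num_bases, and pooling scheme are assumptions,
    not the paper's exact architecture.
    """

    def __init__(self, channels: int, num_bases: int = 64):
        super().__init__()
        # Per-pixel soft assignment of reference features to num_bases basis vectors.
        self.to_basis_logits = nn.Conv2d(channels, num_bases, kernel_size=1)
        self.query_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.key_proj = nn.Linear(channels, channels)
        self.value_proj = nn.Linear(channels, channels)

    def forward(self, query_feat: torch.Tensor, ref_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, C, H, W) features of the input label map
        # ref_feats:  (B, K, C, H, W) features of K reference images
        B, K, C, H, W = ref_feats.shape
        refs = ref_feats.reshape(B * K, C, H, W)

        # Soft-assign every reference pixel to a basis, then pool pixels into basis vectors.
        assign = F.softmax(self.to_basis_logits(refs).flatten(2), dim=-1)  # (B*K, M, H*W)
        feats = refs.flatten(2)                                            # (B*K, C, H*W)
        bases = torch.einsum('bmn,bcn->bmc', assign, feats)                # (B*K, M, C)
        bases = bases.view(B, K * assign.shape[1], C)                      # joint basis over all references

        # Standard attention, but keys/values are the K*M basis vectors instead of K*H*W pixels.
        q = self.query_proj(query_feat).flatten(2).transpose(1, 2)         # (B, H*W, C)
        k = self.key_proj(bases)                                           # (B, K*M, C)
        v = self.value_proj(bases)
        attn = F.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)         # (B, H*W, K*M)
        out = (attn @ v).transpose(1, 2).view(B, C, H, W)
        return out
```

Because the attention map is (H*W) x (K*M) rather than (H*W) x (K*H*W), its cost grows with the number of basis vectors instead of the number of reference pixels, which is the efficiency argument behind the compact representation.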