Generating controllable videos that conform to user intentions is an appealing yet challenging topic in computer vision. To enable maneuverable control in line with user intentions, we propose a novel video generation task named Text-Image-to-Video generation (TI2V). TI2V aims to generate videos from a static image and a text description, offering control over both appearance and motion. The key challenges of the TI2V task lie in aligning appearance and motion across different modalities and in handling the uncertainty of text descriptions. To address these challenges, we propose a Motion Anchor-based video GEnerator (MAGE) with an innovative motion anchor (MA) structure that stores appearance-motion aligned representations. To model uncertainty and increase diversity, MAGE further allows the injection of explicit conditions and implicit randomness. Through three-dimensional axial transformers, the MA interacts with the given image to generate subsequent frames recursively, achieving satisfying controllability and diversity. To accompany the new task, we build two video-text paired datasets based on MNIST and CATER for evaluation. Experiments conducted on these datasets verify the effectiveness of MAGE and show the appealing potential of the TI2V task. Source code for the model and datasets will be released soon.
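As a point of reference for the "three-dimensional axial transformers" mentioned above, the sketch below illustrates the generic axial self-attention pattern over a video token grid: attention is applied along the time, height, and width axes in turn, which reduces the cost of full 3D attention from O((THW)^2) to roughly O(THW·(T+H+W)). This is a minimal illustration in PyTorch under assumed token shapes; `AxialAttention3D`, its parameters, and the dimensions are hypothetical and do not reproduce the authors' MAGE implementation.

```python
# Minimal sketch of 3D axial self-attention (illustrative, not MAGE's code).
import torch
import torch.nn as nn

class AxialAttention3D(nn.Module):
    """Self-attention applied separately along the T, H, and W axes
    of a (B, T, H, W, C) video token grid, with residual connections."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True)
            for _ in range(3)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, H, W, C)
        for axis, attn in zip((1, 2, 3), self.attns):
            others = [d for d in (1, 2, 3) if d != axis]
            perm = (0, *others, axis, 4)              # move target axis next to C
            inv = torch.argsort(torch.tensor(perm)).tolist()
            y = x.permute(*perm)                      # (B, o1, o2, L, C)
            b, o1, o2, L, c = y.shape
            y = y.reshape(b * o1 * o2, L, c)          # fold other dims into batch
            out, _ = attn(y, y, y)                    # attend along one axis only
            y = (y + out).reshape(b, o1, o2, L, c)    # residual connection
            x = y.permute(*inv)                       # restore (B, T, H, W, C)
        return x

# Usage: a toy 4-frame, 8x8 token grid with 32-d features.
tokens = torch.randn(2, 4, 8, 8, 32)
print(AxialAttention3D(32)(tokens).shape)  # torch.Size([2, 4, 8, 8, 32])
```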