A storyboard is a roadmap for video creation, consisting of shot-by-shot images that visualize the key plots of a text synopsis. Creating video storyboards, however, remains challenging: it requires not only associating high-level text with images but also long-term reasoning to make transitions across shots smooth. In this paper, we propose a new task, Text synopsis to Video Storyboard (TeViS), which aims to retrieve an ordered sequence of images to visualize a text synopsis. We construct the MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes manually selected from the corresponding movie for both relevance and cinematic coherence. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence across long shot sequences, we further pre-train the decoder on large-scale movie frames without text. Experimental results demonstrate that our model significantly outperforms other models in creating text-relevant and coherent storyboards. Nevertheless, a large gap to human performance remains, suggesting room for promising future work.
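To make the encoder-decoder baseline concrete, the sketch below shows one plausible instantiation: synopsis tokens and candidate frames are embedded in a shared space by a pretrained vision-and-language model (e.g., CLIP), and an autoregressive transformer decoder predicts the embedding of the next keyframe, which is then matched against candidates by cosine similarity. This is a minimal illustration under our own assumptions; the module names, dimensions, and layer counts are hypothetical and not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class StoryboardDecoder(nn.Module):
    """Hypothetical sketch of the autoregressive decoder: given the synopsis
    embedding (memory) and embeddings of previously selected keyframes, it
    predicts the embedding of the next keyframe to retrieve."""

    def __init__(self, dim: int = 512, heads: int = 8, layers: int = 6):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        # Learned start token that seeds generation of the first shot.
        self.start = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, text_emb: torch.Tensor,
                frame_embs: torch.Tensor) -> torch.Tensor:
        # text_emb:   (B, T_text, dim)  from a pretrained VL text encoder
        # frame_embs: (B, T_prev, dim)  embeddings of already-chosen frames
        b = text_emb.size(0)
        tgt = torch.cat([self.start.expand(b, -1, -1), frame_embs], dim=1)
        # Causal mask so each position only attends to earlier shots.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory=text_emb, tgt_mask=mask)
        return out  # (B, T_prev + 1, dim): predicted next-frame embeddings


# Retrieval step: score candidate images against the predicted embedding
# and greedily pick the best match, repeating shot by shot.
def retrieve_next(pred: torch.Tensor, candidates: torch.Tensor) -> int:
    # pred: (dim,); candidates: (N, dim) from the pretrained image encoder
    sims = nn.functional.cosine_similarity(pred.unsqueeze(0), candidates)
    return int(sims.argmax())
```

In this sketch, text-image relevance comes from the frozen vision-and-language encoders, while shot-to-shot coherence would come from pre-training the decoder on ordered movie-frame sequences without text, as the abstract describes.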