A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.
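To make the frame-scoring idea concrete, below is a minimal sketch of how a language-guided multimodal transformer might score frames conditioned on a text embedding (a user query or a generated dense caption). This is only an illustrative sketch under assumptions, not the authors' released CLIP-It implementation: the module name `LanguageGuidedScorer`, the feature dimensionality, and the use of PyTorch's built-in attention and encoder layers are choices made here for illustration.

```python
# Illustrative sketch only (assumed architecture, not the official CLIP-It code).
# Frame features are assumed to come from an image encoder such as CLIP's, and
# text features from an encoding of the query or the dense video caption.
import torch
import torch.nn as nn

class LanguageGuidedScorer(nn.Module):
    """Scores video frames conditioned on a language embedding (query or caption)."""
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Cross-attention: frame features attend to the language tokens.
        self.lang_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Self-attention over frames to model their importance relative to one another.
        enc_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=2048, batch_first=True
        )
        self.frame_encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.score_head = nn.Linear(d_model, 1)  # per-frame relevance score

    def forward(self, frame_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, T, d) per-frame embeddings
        # text_feats:  (B, L, d) token/sentence embeddings of the query or caption
        guided, _ = self.lang_attn(query=frame_feats, key=text_feats, value=text_feats)
        fused = self.frame_encoder(frame_feats + guided)   # frame-to-frame interactions
        return self.score_head(fused).squeeze(-1)          # (B, T) frame scores

# Toy usage with made-up sizes: 120 frames, 20 text tokens, 512-d features.
scores = LanguageGuidedScorer()(torch.randn(1, 120, 512), torch.randn(1, 20, 512))
print(scores.shape)  # torch.Size([1, 120])
```

The highest-scoring frames (or shots) would then be selected to form the generic or query-focused summary.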