Video-language modeling has attracted much attention with the rapid growth of web videos. Most existing methods assume that video frames and the text description are semantically correlated, and focus on video-language modeling at the video level. However, this assumption often fails for two reasons: (1) with the rich semantics of video content, it is difficult to cover all frames with a single video-level description; (2) a raw video typically contains noisy or meaningless information (e.g., scenery shots, transitions, or teasers). Although a number of recent works deploy attention mechanisms to alleviate this problem, the irrelevant/noisy information remains difficult to handle. To overcome this challenge, we propose an efficient and effective model, termed Language-Guided Denoising Network (LGDN), for video-language modeling. Unlike most existing methods that utilize all extracted video frames, LGDN dynamically filters out misaligned or redundant frames under language supervision and retains only 2--4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state of the art by large margins. We also provide a detailed ablation study to reveal the critical importance of solving the noise issue, in the hope of inspiring future video-language work.
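To make the language-guided filtering idea concrete, the following is a minimal sketch in PyTorch. It assumes pre-computed frame and text embeddings and ranks frames by cosine similarity to the sentence embedding; the actual LGDN scoring module is not specified in the abstract, so this similarity-based top-k selection is only an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def select_salient_frames(frame_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          k: int = 4) -> torch.Tensor:
    """Keep the k frames most similar to the text embedding.

    frame_emb: (num_frames, dim) per-frame features.
    text_emb:  (dim,) sentence-level feature.
    Returns the (k, dim) subset of salient frames.
    """
    # Cosine similarity between each frame and the sentence embedding.
    sims = F.cosine_similarity(frame_emb, text_emb.unsqueeze(0), dim=-1)
    # Rank frames by relevance and keep the top-k (2--4 in the paper).
    topk = sims.topk(k=min(k, frame_emb.size(0))).indices
    # Re-sort the kept indices so temporal order is preserved.
    return frame_emb[topk.sort().values]

# Example: 16 extracted frames with 512-d features, keep 4 salient frames.
frames = torch.randn(16, 512)
text = torch.randn(512)
salient = select_salient_frames(frames, text, k=4)
print(salient.shape)  # torch.Size([4, 512])
```

The retained salient frames would then feed the cross-modal token-level alignment stage, while the filtered-out frames are discarded as noise.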