In this paper, we teach machines to understand visuals and natural language by learning the mapping between sentences and noisy video snippets without explicit annotations. First, we define a self-supervised learning framework that captures the cross-modal information. A novel adversarial learning module is then introduced to explicitly handle the noise in natural videos, where the subtitle sentences are not guaranteed to correspond closely to the video snippets. For training and evaluation, we contribute a new dataset `ApartmenTour' that contains a large number of online videos and subtitles. We carry out experiments on the bidirectional retrieval tasks between sentences and videos, and the results demonstrate that our proposed model achieves state-of-the-art performance on both retrieval tasks, exceeding several strong baselines. The dataset can be downloaded at https://github.com/zyj-13/WAL.