We introduce a new task, video corpus visual answer localization (VCVAL), which aims to locate the visual answer to a natural language question within a large collection of untrimmed instructional videos. This task requires a range of skills: vision-language interaction, video retrieval, passage comprehension, and visual answer localization. In this paper, we propose a cross-modal contrastive global-span (CCGS) method for VCVAL, jointly training the video corpus retrieval and visual answer localization subtasks with a global-span matrix. We have reconstructed a dataset named MedVidCQA, on which the VCVAL task is benchmarked. Experimental results show that the proposed method outperforms competitive baselines on both the video corpus retrieval and visual answer localization subtasks. Finally, we provide detailed analyses of extensive experiments, paving a new path for instructional video understanding and opening avenues for further research.
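The abstract does not specify how the global-span matrix is constructed. As a hypothetical illustration only (all names, shapes, and the scoring scheme below are assumptions, not the paper's actual CCGS formulation), one can sketch the core idea of unifying the two subtasks: stack per-video span scores into a single corpus-level matrix, so that a single argmax jointly selects the video (retrieval) and the answer span within it (localization).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-video span scores: for each candidate video, a
# (frames x frames) matrix where entry (i, j) scores the answer span
# starting at frame i and ending at frame j. Shapes are illustrative.
num_videos, num_frames = 3, 4
span_scores = [rng.standard_normal((num_frames, num_frames))
               for _ in range(num_videos)]

# Global-span idea (our reading of the abstract, not the paper's exact
# method): concatenate per-video score matrices along a new video axis
# and mask invalid spans (start must not exceed end).
global_span = np.stack(span_scores)                      # (videos, frames, frames)
valid = np.triu(np.ones((num_frames, num_frames), bool)) # start <= end
global_span = np.where(valid, global_span, -np.inf)

# One argmax over the whole corpus-level matrix yields both predictions:
# which video contains the answer, and where the answer span lies.
video_idx, start, end = np.unravel_index(global_span.argmax(),
                                         global_span.shape)
print(f"video {video_idx}, span [{start}, {end}]")
```

In this sketch, joint training would simply backpropagate one loss through the corpus-level matrix, coupling the retrieval and localization decisions instead of optimizing them separately.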