Existing multimodal machine translation (MMT) datasets consist of images and video captions or general subtitles, which rarely contain linguistic ambiguity, making visual information less effective for generating appropriate translations. We introduce VISA, a new dataset consisting of 40k Japanese-English parallel sentence pairs and corresponding video clips, with the following key features: (1) the parallel sentences are subtitles from movies and TV episodes; (2) the source subtitles are ambiguous, meaning they have multiple possible translations with different meanings; (3) we divide the dataset into Polysemy and Omission subsets according to the cause of the ambiguity. We show that VISA is challenging for the latest MMT system, and we hope that the dataset can facilitate MMT research.