Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features, and has attracted considerable attention from both the natural language processing and computer vision communities. However, most recent approaches still train a separate model for each language pair, which becomes costly and unaffordable as the number of languages grows in real-world settings. The multilingual multimodal machine translation (Multilingual MMT) task, which addresses this issue by providing a shared semantic space for multiple languages, has not yet been investigated. Moreover, the image modality has no language boundaries, which makes it well suited to bridging the semantic gap between languages. To this end, we first propose the Multilingual MMT task and establish two new Multilingual MMT benchmark datasets covering seven languages. We then propose LVP-M3, an effective baseline that uses visual prompts to support translation between different languages in three stages: token encoding, language-aware visual prompt generation, and language translation. Extensive experimental results on the constructed benchmark datasets demonstrate the effectiveness of the proposed LVP-M3 method for Multilingual MMT.
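To make the three-stage pipeline concrete, the following is a minimal PyTorch sketch of one plausible realization: a shared text encoder (token encoding), a learned per-language embedding that conditions attention over image features (language-aware visual prompt generation), and a standard decoder over the concatenated text and prompt states (language translation). All module choices, dimensions, and names here (`LVPM3Sketch`, `prompt_gen`, the 2048-dimensional visual features) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LVPM3Sketch(nn.Module):
    """Illustrative three-stage Multilingual MMT pipeline: token encoding,
    language-aware visual prompt generation, and language translation.
    Hyperparameters and module choices are assumptions for this sketch."""

    def __init__(self, vocab_size, num_langs, d_model=512, nhead=8):
        super().__init__()
        # Stage 1: token encoding for the source text, plus a projection
        # for pre-extracted image features (e.g., from a frozen vision backbone).
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        self.img_proj = nn.Linear(2048, d_model)  # assumed visual feature dim

        # Stage 2: language-aware visual prompt generation. A learned
        # embedding per target language conditions attention over the visual
        # features, so one shared model can serve many target languages.
        self.lang_emb = nn.Embedding(num_langs, d_model)
        self.prompt_gen = nn.MultiheadAttention(d_model, nhead, batch_first=True)

        # Stage 3: language translation with a standard Transformer decoder.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, img_feats, tgt_lang_id, tgt_ids):
        # Stage 1: encode source tokens and project image region features.
        text = self.text_encoder(self.tok_emb(src_ids))   # (B, S, d)
        vis = self.img_proj(img_feats)                    # (B, R, d)

        # Stage 2: the target-language embedding shifts the queries, so the
        # resulting visual prompts depend on which language we translate into.
        lang_q = self.lang_emb(tgt_lang_id).unsqueeze(1)  # (B, 1, d)
        prompts, _ = self.prompt_gen(vis + lang_q, vis, vis)  # (B, R, d)

        # Stage 3: decode conditioned on text states and visual prompts.
        memory = torch.cat([text, prompts], dim=1)
        hidden = self.decoder(self.tok_emb(tgt_ids), memory)
        return self.out(hidden)                           # (B, T, vocab)

# Toy usage with random inputs (7 target languages, as in the benchmarks).
model = LVPM3Sketch(vocab_size=1000, num_langs=7)
logits = model(
    src_ids=torch.randint(0, 1000, (2, 10)),
    img_feats=torch.randn(2, 36, 2048),   # e.g., 36 region features per image
    tgt_lang_id=torch.tensor([0, 3]),
    tgt_ids=torch.randint(0, 1000, (2, 12)),
)
print(logits.shape)  # torch.Size([2, 12, 1000])
```

Conditioning the visual prompts on a target-language embedding is what lets a single shared model cover all language pairs, consistent with the abstract's motivation that images have no language boundaries and can bridge the semantic gap between languages.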