Recently, there has been a surge of research in multimodal machine translation (MMT), where additional modalities such as images are used to improve the translation quality of text-only systems. A particularly promising application of such multimodal systems is simultaneous machine translation, where visual context has been shown to complement the partial information provided by the source sentence, especially in the early phases of translation. In this paper, we propose the first Transformer-based simultaneous MMT architecture. Additionally, we extend this model with an auxiliary supervision signal that guides its visual attention mechanism using labelled phrase-region alignments. We perform comprehensive experiments on three language directions and conduct thorough quantitative and qualitative analyses using both automatic metrics and manual inspection. Our results show that (i) supervised visual attention consistently improves the translation quality of the MMT models, and (ii) fine-tuning the MMT with the supervision loss enabled leads to better performance than training the MMT from scratch. Compared to the state of the art, our proposed model achieves improvements of up to 2.3 BLEU and 3.5 METEOR points.