Recently, a number of works have shown that the performance of neural machine translation (NMT) can be improved to a certain extent by using visual information. However, most of these conclusions are drawn from experimental results on a limited set of bilingual sentence-image pairs, such as Multi30K. In such datasets, the content of each bilingual parallel sentence pair must be well represented by a manually annotated image, which differs from the actual translation scenario. Some previous works address this problem by retrieving images from existing sentence-image pairs with a topic model. However, because of the limited collection of sentence-image pairs they use, their image retrieval method struggles with out-of-vocabulary words and can hardly prove that it is visual information, rather than the mere co-occurrence of images and sentences, that enhances NMT. In this paper, we propose an open-vocabulary image retrieval method that collects descriptive images for a bilingual parallel corpus using an image search engine. We then propose a text-aware attentive visual encoder to filter out incorrectly collected noisy images. Experimental results on Multi30K and two other translation datasets show that our proposed method achieves significant improvements over strong baselines.
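The filtering step described above can be pictured as text-conditioned attention over the retrieved images: the sentence representation scores each image, so noisy retrievals receive low weight in the fused visual context. The sketch below is a minimal illustration under assumed dimensions and names (TextAwareVisualEncoder, text_dim, img_dim, etc., are hypothetical), not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TextAwareVisualEncoder(nn.Module):
    """Illustrative sketch: additive attention over retrieved image
    features, conditioned on the source-sentence representation, so
    images unrelated to the sentence get low weight."""

    def __init__(self, text_dim: int, img_dim: int, hidden_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, text_vec: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
        # text_vec:  (batch, text_dim)          sentence representation
        # img_feats: (batch, n_images, img_dim) features of retrieved images
        q = self.text_proj(text_vec).unsqueeze(1)      # (batch, 1, hidden)
        k = self.img_proj(img_feats)                   # (batch, n_images, hidden)
        e = self.score(torch.tanh(q + k)).squeeze(-1)  # (batch, n_images) scores
        a = torch.softmax(e, dim=-1)                   # noisy images -> low weight
        # Weighted sum yields a single visual context vector per sentence.
        return (a.unsqueeze(-1) * img_feats).sum(dim=1)  # (batch, img_dim)
```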