Multi-source translation is an approach that exploits multiple inputs (e.g., the same sentence in two different languages) to increase translation accuracy. In this paper, we examine approaches for multi-source neural machine translation (NMT) that use an incomplete multilingual corpus in which some translations are missing. In practice, many multilingual corpora are incomplete because of the difficulty of providing translations in all of the relevant languages (for example, in TED Talks, most English talks have subtitles in only a small portion of the languages that TED supports). Existing studies on multi-source translation did not explicitly handle such situations. This study focuses on the use of incomplete multilingual corpora in multi-encoder NMT and mixture-of-NMT-experts models, and examines a very simple implementation in which missing source translations are replaced by a special symbol <NULL>. These methods allow us to use incomplete corpora both at training time and at test time. In experiments with real incomplete multilingual corpora of TED Talks, multi-source NMT with the <NULL> tokens achieved higher translation accuracy, measured by BLEU, than any one-to-one NMT system.
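The <NULL>-token replacement described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the corpus format, the helper name `fill_missing_sources`, and the language codes are all assumptions made for the example.

```python
# Sketch of <NULL> replacement for an incomplete multilingual corpus:
# whenever a source-side translation is missing for some language, it is
# substituted with a single special token, so that a multi-source NMT
# model always receives one input sentence per source language.
# (Illustrative only; names and corpus format are assumptions.)

NULL_TOKEN = "<NULL>"

def fill_missing_sources(example, source_langs):
    """Return one source sentence per language, using <NULL> for gaps."""
    return {lang: example.get(lang) or NULL_TOKEN for lang in source_langs}

# Incomplete training example: the German translation is missing.
example = {"en": "Thank you .", "fr": "Merci ."}
sources = fill_missing_sources(example, ["en", "fr", "de"])
print(sources)
# {'en': 'Thank you .', 'fr': 'Merci .', 'de': '<NULL>'}
```

Because the same substitution is applied at both training and test time, the model learns to rely on whichever source languages happen to be present for a given sentence.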