Transfer learning has been an important technique for low-resource neural machine translation. In this work, we build two systems to study how language relatedness affects translation performance. The primary system adopts a machine translation model pre-trained on a related language pair, while the contrastive system adopts one pre-trained on an unrelated language pair. We show that relatedness is not required for transfer learning to work, but it does improve performance.