Neural encoder-decoder models of machine translation have achieved impressive results, while learning linguistic knowledge of both the source and the target language in an implicit end-to-end manner. We propose a framework in which our model begins by learning syntax and translation in an interleaved fashion, gradually shifting its focus toward translation. Using this approach, we achieve considerable improvements in BLEU score on a relatively large parallel corpus (WMT14 English to German) and in a low-resource setting (WIT German to English).
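The abstract describes a training schedule that interleaves a syntax task with translation and gradually puts more weight on translation. The sketch below illustrates one plausible realization of such a schedule, assuming a sigmoid-shaped ramp and per-batch task sampling; the exact schedule shape, function names, and parameters are illustrative assumptions, not the paper's specification.

```python
import math
import random

def translation_prob(step, total_steps, sharpness=10.0):
    """Probability of sampling the translation task at a given training step.

    Assumed sigmoid ramp: starts near 0 (mostly syntax batches) and
    rises toward 1 (mostly translation batches) as training progresses.
    """
    progress = step / total_steps
    return 1.0 / (1.0 + math.exp(-sharpness * (progress - 0.5)))

def sample_task(step, total_steps):
    """Interleave tasks: choose which objective the next batch trains."""
    if random.random() < translation_prob(step, total_steps):
        return "translation"
    return "syntax"

# Example: how the task mix shifts over a hypothetical training run.
total = 10_000
for checkpoint in (0, 2_500, 5_000, 7_500, 9_999):
    p = translation_prob(checkpoint, total)
    print(f"step {checkpoint:>5}: P(translation) = {p:.2f}")
```

Under this sampling scheme, early batches are dominated by the syntax objective and late batches by translation, matching the "gradually shifting focus" described above.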