For most language combinations, parallel data is either scarce or simply unavailable. To address this, unsupervised machine translation (UMT) exploits large amounts of monolingual data by using synthetic data generation techniques such as back-translation and noising, while self-supervised NMT (SSNMT) identifies parallel sentences in smaller comparable data and trains on them. To date, the inclusion of UMT data generation techniques in SSNMT has not been investigated. We show that incorporating UMT techniques into SSNMT significantly outperforms SSNMT and UMT on all tested language pairs, with improvements of up to +4.3, +50.8 and +51.5 BLEU over SSNMT, statistical UMT and hybrid UMT, respectively, on Afrikaans to English. We further show that the combination of multilingual denoising autoencoding, SSNMT with back-translation and bilingual finetuning enables us to learn machine translation even for distant language pairs for which only small amounts of monolingual data are available, e.g. yielding a BLEU score of 11.6 on English to Swahili.
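To make the two UMT data generation techniques named above concrete, the following minimal Python sketch illustrates word-level noising and back-translation of target-side monolingual data. It is an illustration only, not the paper's implementation: the `tgt2src_model` object and its `translate` method are hypothetical placeholders, and the noise parameters are arbitrary examples.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_k=3):
    """UMT-style noising: word dropout plus local token shuffling."""
    # Word dropout: randomly remove tokens with probability drop_prob.
    kept = [t for t in tokens if random.random() > drop_prob]
    # Local shuffling: each token may move at most ~shuffle_k positions.
    keys = [i + random.uniform(0, shuffle_k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

def back_translate(target_sentences, tgt2src_model):
    """Back-translation: turn target-side monolingual sentences into
    synthetic (source, target) training pairs."""
    pairs = []
    for tgt in target_sentences:
        synthetic_src = tgt2src_model.translate(tgt)  # hypothetical API
        pairs.append((synthetic_src, tgt))
    return pairs
```

In training, such synthetic pairs (and noised copies of monolingual sentences used for denoising autoencoding) would be mixed with the parallel sentences that SSNMT extracts from comparable data.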