Massively multilingual machine translation (MT) has shown impressive capabilities, including zero-shot and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to the lack of standardized evaluation datasets. In this paper, we present MENYO-20k, the first multi-domain parallel corpus with a special focus on clean orthography for Yor\`ub\'a--English, with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models on both the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data of uncertain quality, we also analyze the effect of diacritics, a major characteristic of Yor\`ub\'a, in the training data. We investigate how and when this training condition affects the final quality and intelligibility of a translation. Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$ BLEU) when translating to Yor\`ub\'a, setting a high-quality benchmark for future research.