We translate a closed text that is known in advance into a severely low-resource language by leveraging massive source parallelism. Our contribution is fourfold. First, we rank 124 source languages empirically to determine their closeness to the low-resource language and select the top few. We call the linguistic definition of language family the Family of Origin (FAMO), and the empirical definition of higher-ranked languages under our metrics the Family of Choice (FAMC). Second, we build an Iteratively Pretrained Multilingual Order-preserving Lexiconized Transformer (IPML) to train on ~1,000 lines (~3.5%) of low-resource data. Using English as a hypothetical low-resource language translated from Spanish, we obtain a +24.7 BLEU increase over a multilingual baseline and a +10.2 BLEU increase over our asymmetric baseline on the Bible dataset. Third, we also evaluate on a real severely low-resource Mayan language, Eastern Pokomchi. Finally, we add an order-preserving lexiconized component to translate named entities accurately. We build a massive lexicon table of 2,939 Bible named entities in 124 source languages, including many that occur only once, covering more than 66 severely low-resource languages. Training on 1,093 randomly sampled lines of low-resource data, we reach a 30.3 BLEU score for Spanish-English translation tested on 30,022 lines of the Bible, and a 42.8 BLEU score for Portuguese-English translation on the medical EMEA dataset.
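The order-preserving lexiconized component can be illustrated with a minimal sketch: named entities in the source sentence are replaced by order-indexed placeholders, the sentence is translated, and the target-side spellings are then restored from the lexicon table. The lexicon entries, the `mask_entities`/`unmask_entities` helpers, and the example sentence below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of order-preserving lexiconized named-entity handling.
# A small Spanish->English sample lexicon stands in for the paper's
# 2,939-entity, 124-language lexicon table.
LEXICON = {"Moisés": "Moses", "Egipto": "Egypt"}

def mask_entities(sentence, lexicon):
    """Replace known named entities with order-indexed placeholder tokens."""
    mapping = []   # target-side spellings, in source order
    tokens = []
    for tok in sentence.split():
        if tok in lexicon:
            tokens.append(f"__NE{len(mapping)}__")
            mapping.append(lexicon[tok])
        else:
            tokens.append(tok)
    return " ".join(tokens), mapping

def unmask_entities(translation, mapping):
    """Restore target-side entity spellings after translation,
    preserving the original source-side order via the indices."""
    for i, target in enumerate(mapping):
        translation = translation.replace(f"__NE{i}__", target)
    return translation

masked, mapping = mask_entities("Moisés salió de Egipto", LEXICON)
# masked  -> "__NE0__ salió de __NE1__"
# mapping -> ["Moses", "Egypt"]
restored = unmask_entities("__NE0__ left __NE1__", mapping)
# restored -> "Moses left Egypt"
```

Because the placeholders are indexed, entity order survives the translation step even when the surrounding words are reordered, which is what allows rare entities (including those occurring only once) to be translated exactly.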