Natural language processing of low-resource languages (LRLs) is often challenged by the lack of data. Achieving accurate machine translation (MT) in a low-resource setting is therefore a real problem that requires practical solutions. Research on multilingual models has shown that some LRLs can be handled by such models. However, their large size and computational demands make them impractical in constrained environments (e.g., mobile/IoT devices or limited/old servers). In this paper, we address this problem by leveraging the power of large multilingual MT models through knowledge distillation. Knowledge distillation can transfer knowledge from a large and complex teacher model to a simpler and smaller student model without losing much performance. We also make use of high-resource languages that are related to, or share the same linguistic roots as, the target LRL. For our evaluation, we consider Luxembourgish as the LRL that shares some roots and properties with German. We build multiple resource-efficient models based on German, knowledge distillation from the multilingual No Language Left Behind (NLLB) model, and pseudo-translation. We find that our efficient models are more than 30\% faster and perform only 4\% lower compared to the large state-of-the-art NLLB model.
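To make the distillation step concrete, the sketch below shows a common word-level knowledge-distillation objective for MT: the student is trained on a mix of soft teacher distributions and the hard reference tokens. This is a minimal illustration assuming a PyTorch setup; the temperature, mixing weight, and tensor layout are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of a word-level knowledge-distillation loss for MT
# (assumed PyTorch setup; hyperparameters are illustrative, not the paper's).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      temperature=2.0, alpha=0.5, pad_id=0):
    """Combine a soft-target KL term (teacher -> student) with the usual
    cross-entropy on reference tokens. Logits: (batch, seq_len, vocab)."""
    # Soft targets: KL divergence between temperature-softened distributions.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="none").sum(-1)

    # Hard targets: cross-entropy against the reference translation.
    ce = F.cross_entropy(student_logits.transpose(1, 2), target_ids,
                         ignore_index=pad_id, reduction="none")

    # Average over non-padding tokens only.
    mask = (target_ids != pad_id).float()
    kd = (kd * mask).sum() / mask.sum()
    ce = (ce * mask).sum() / mask.sum()

    # The temperature**2 factor keeps gradient magnitudes comparable
    # between the soft and hard terms (standard in distillation setups).
    return alpha * (temperature ** 2) * kd + (1.0 - alpha) * ce
```

In practice, the teacher logits would come from a large multilingual model such as NLLB run in inference mode, while the student is the small resource-efficient model being trained; the weight `alpha` controls how much the student relies on the teacher versus the reference data.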