Benefiting from transformer-based pre-trained language models, neural ranking models have made significant progress. More recently, the advent of multilingual pre-trained language models has provided strong support for designing neural cross-lingual retrieval models. However, due to unbalanced pre-training data across languages, multilingual language models have already shown a performance gap between high- and low-resource languages on many downstream tasks. Cross-lingual retrieval models built on such pre-trained models can inherit this language bias, leading to suboptimal results for low-resource languages. Moreover, unlike the English-to-English retrieval task, where large-scale training collections for document ranking such as MS MARCO are available, the lack of cross-lingual retrieval data for low-resource languages makes training cross-lingual retrieval models more challenging. In this work, we propose OPTICAL: Optimal Transport distillation for low-resource Cross-lingual information retrieval. To transfer a model from high- to low-resource languages, OPTICAL formulates the cross-lingual token alignment task as an optimal transport problem and learns from a well-trained monolingual retrieval model. By separating cross-lingual knowledge from the knowledge of query-document matching, OPTICAL only needs bitext data for distillation training, which is more feasible for low-resource languages. Experimental results show that, with minimal training data, OPTICAL significantly outperforms strong baselines on low-resource languages, including those based on neural machine translation.
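To make the optimal transport idea concrete, the sketch below shows one way soft token alignment between a bitext pair could be computed with entropy-regularized optimal transport (Sinkhorn iterations) and used as a distillation signal. This is an illustrative toy example under assumed shapes and names, not the authors' implementation; all functions and variables here are hypothetical.

```python
# Illustrative sketch: optimal-transport token alignment for distillation.
# Assumption: token embeddings from a monolingual teacher and a multilingual
# student are given for one bitext pair; names and shapes are hypothetical.
import numpy as np


def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (m, n) pairwise cost matrix between source and target tokens.
    a, b: marginal distributions over source / target tokens (each sums to 1).
    Returns the (m, n) soft transport plan.
    """
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-9)             # column scaling update
        u = a / (K @ v + 1e-9)               # row scaling update
    return u[:, None] * K * v[None, :]


# Toy bitext pair: m tokens on the high-resource (teacher) side,
# n tokens on the low-resource (student) side, d-dimensional embeddings.
rng = np.random.default_rng(0)
m, n, d = 5, 7, 16
teacher_tok = rng.normal(size=(m, d))        # embeddings from the monolingual teacher
student_tok = rng.normal(size=(n, d))        # embeddings from the multilingual student

# Cost = 1 - cosine similarity for every (teacher, student) token pair.
teacher_norm = teacher_tok / np.linalg.norm(teacher_tok, axis=1, keepdims=True)
student_norm = student_tok / np.linalg.norm(student_tok, axis=1, keepdims=True)
cost = 1.0 - teacher_norm @ student_norm.T

# Uniform marginals: every token must be (softly) aligned somewhere.
a = np.full(m, 1.0 / m)
b = np.full(n, 1.0 / n)
plan = sinkhorn(cost, a, b)

# Transporting teacher embeddings through the plan gives a "teacher view" of
# each student token; a simple MSE distillation loss pulls the student toward it.
aligned_teacher = (plan.T @ teacher_tok) / (plan.sum(axis=0, keepdims=True).T + 1e-9)
distill_loss = np.mean((student_tok - aligned_teacher) ** 2)
print("OT alignment cost:", float(np.sum(plan * cost)))
print("toy distillation loss:", float(distill_loss))
```

Because the alignment is learned from bitext alone, this kind of distillation needs no cross-lingual relevance labels, which matches the abstract's claim that only bitext data is required for low-resource languages.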