A popular approach to creating a zero-shot cross-language retrieval model is to substitute the monolingual pretrained language model in a retrieval model with a multilingual pretrained language model such as Multilingual BERT. The multilingual model is fine-tuned on the retrieval task with monolingual data, such as English MS MARCO, using the same training recipe as the monolingual retrieval model. However, such transferred models suffer from a mismatch between the languages of the input text seen at training and at inference time. In this work, we propose transferring monolingual retrieval models using adapters, a parameter-efficient component of a transformer network. Prior work has shown that, by stacking task-specific adapters on top of adapters pretrained on language tasks for a specific language, adapter-enhanced models outperform full-model fine-tuning when transferring across languages in various NLP tasks. By constructing dense retrieval models with adapters, we show that models trained with monolingual data are more effective than full-model fine-tuning when transferred to a Cross-Language Information Retrieval (CLIR) setting. However, we find that the prior suggestion of replacing the language adapter to match the target language at inference time is suboptimal for dense retrieval models. We provide an in-depth analysis of this discrepancy between CLIR and other cross-language NLP tasks.
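The abstract describes stacking a pretrained language adapter with a task adapter inside the transformer. The following PyTorch sketch illustrates that general idea only; it is not the paper's implementation, and the bottleneck size, activation, insertion point, and the names `Adapter` and `AdapterBlock` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual connection."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdapterBlock(nn.Module):
    """Applies a language adapter followed by a task adapter to a (frozen)
    transformer sublayer's hidden states, in the spirit of MAD-X-style stacking."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Language adapter: assumed pretrained on the specific language and kept frozen.
        self.lang_adapter = Adapter(hidden_size)
        # Task adapter: assumed trained on the retrieval task (e.g., English MS MARCO).
        self.task_adapter = Adapter(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.lang_adapter(hidden_states)
        return self.task_adapter(hidden_states)


# Usage example: stacked adapters applied to one layer's hidden states.
hidden = torch.randn(2, 128, 768)   # (batch, seq_len, hidden_size)
block = AdapterBlock(hidden_size=768)
out = block(hidden)
print(out.shape)                    # torch.Size([2, 128, 768])
```

In this sketch, only the adapter parameters would be trained while the backbone transformer stays frozen, which is what makes the approach parameter-efficient; swapping the language adapter at inference time corresponds to replacing `lang_adapter` with one pretrained on the target language.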