Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages. Recent approaches use training data only from a resource-rich language such as English to fine-tune large-scale cross-lingual pre-trained language models. Because of the large differences between languages, a model fine-tuned only on a source language may not perform well on target languages. Interestingly, we observe that while the top-1 results predicted by previous approaches often fail to hit the ground-truth answers, the correct answers are frequently contained among the top-k predictions. Based on this observation, we develop a two-stage approach to enhance model performance. The first stage targets recall: we design a hard-learning (HL) algorithm to maximize the likelihood that the top-k predictions contain the accurate answer. The second stage focuses on precision: an answer-aware contrastive learning (AA-CL) mechanism is developed to learn the subtle differences between the accurate answer and other candidates. Extensive experiments show that our model significantly outperforms a series of strong baselines on two cross-lingual MRC benchmark datasets.
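The abstract only sketches the two training objectives, so the following is a minimal PyTorch sketch of one plausible reading of them rather than the authors' implementation: hard learning (HL) is rendered as a hard-EM-style objective that maximizes the log-probability of the best-scoring candidate span matching the ground truth, and answer-aware contrastive learning (AA-CL) as an InfoNCE-style loss that separates the gold answer representation from the other top-k candidates. All tensor shapes, function names (`hard_learning_loss`, `aa_contrastive_loss`), and hyperparameters (`k`, `temperature`) are illustrative assumptions.

```python
# Hedged sketch of the two-stage objectives described in the abstract.
# The hard-EM reading of HL and the InfoNCE form of AA-CL are assumptions,
# not the authors' actual method.
import torch
import torch.nn.functional as F


def hard_learning_loss(start_logits, end_logits, gold_spans, k=5):
    """Stage 1 (recall): push a gold span into the top-k candidate spans.

    start_logits, end_logits: (seq_len,) logits from a span-extraction head.
    gold_spans: iterable of acceptable (start, end) index pairs.
    Hedged reading: optimize only the highest-scoring candidate that matches
    a gold span (hard-EM style), falling back to all gold spans otherwise.
    """
    seq_len = start_logits.size(0)
    # Score every (start, end) span with start <= end.
    span_scores = start_logits.unsqueeze(1) + end_logits.unsqueeze(0)  # (L, L)
    valid = torch.triu(torch.ones(seq_len, seq_len)).bool()
    flat = span_scores.masked_fill(~valid, float("-inf")).view(-1)
    log_probs = F.log_softmax(flat, dim=-1)

    topk_idx = {i.item() for i in flat.topk(k).indices}
    gold_idx = [s * seq_len + e for (s, e) in gold_spans]
    hits = [i for i in gold_idx if i in topk_idx]  # gold spans already in top-k
    target = torch.tensor(hits if hits else gold_idx)
    # Maximize the log-probability of the best matching candidate.
    return -log_probs[target].max()


def aa_contrastive_loss(query_repr, gold_repr, negative_reprs, temperature=0.1):
    """Stage 2 (precision): answer-aware contrastive loss (InfoNCE-style).

    query_repr:     (d,)   pooled question/passage representation (assumed anchor).
    gold_repr:      (d,)   representation of the ground-truth answer span.
    negative_reprs: (n, d) representations of the other top-k candidates.
    """
    pos = F.cosine_similarity(query_repr, gold_repr, dim=0) / temperature
    neg = F.cosine_similarity(query_repr.unsqueeze(0), negative_reprs, dim=1) / temperature
    logits = torch.cat([pos.unsqueeze(0), neg]).unsqueeze(0)  # gold is class 0
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))


if __name__ == "__main__":
    # Toy usage with random tensors, just to show the expected shapes.
    L, d, n = 32, 64, 4
    print(hard_learning_loss(torch.randn(L), torch.randn(L), [(3, 5)], k=5))
    print(aa_contrastive_loss(torch.randn(d), torch.randn(d), torch.randn(n, d)))
```

The fallback to all gold spans when none appear in the top-k is a design choice of this sketch so that the loss stays finite early in training; the actual HL algorithm may handle that case differently.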