Due to high annotation costs, making the best use of existing human-created training data is an important research direction. We therefore carry out a systematic evaluation of the transferability of BERT-based neural ranking models across five English datasets. Previous studies focused primarily on zero-shot and few-shot transfer from a large dataset to a dataset with only a small number of queries. In contrast, each of our collections has a substantial number of queries, which enables a full-shot evaluation mode and improves the reliability of our results. Furthermore, because the licences of source datasets often prohibit commercial use, we compare transfer learning to training on pseudo-labels generated by a BM25 scorer. We find that training on pseudo-labels, possibly with subsequent fine-tuning using a modest number of annotated queries, can produce a model that is competitive with or better than one obtained via transfer learning. Yet, it is still necessary to improve the stability and/or effectiveness of few-shot training, which can sometimes degrade the performance of a pretrained model.
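The pseudo-labeling setup described above can be illustrated with a minimal sketch, which is not the paper's actual pipeline: BM25 ranks the corpus for each training query, highly ranked documents are treated as pseudo-relevant while low-ranked ones serve as negatives, and the resulting triples can then be used to train a neural ranker. The rank_bm25 library, the toy corpus and queries, and the rank cutoffs below are illustrative assumptions.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Toy corpus and queries standing in for a real unlabeled retrieval collection.
corpus = [
    "bert based rankers transfer across retrieval collections",
    "bm25 is a strong lexical baseline for document ranking",
    "few shot fine tuning can destabilize a pretrained model",
]
queries = ["neural ranking transfer", "lexical retrieval baseline"]

# BM25 over a whitespace-tokenized corpus (a real pipeline would use proper tokenization).
bm25 = BM25Okapi([doc.split() for doc in corpus])

pseudo_labeled_triples = []  # (query, document, pseudo-label)
for query in queries:
    scores = bm25.get_scores(query.split())
    ranking = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
    # Top-ranked document is treated as a pseudo-positive,
    # the lowest-ranked one as a pseudo-negative.
    pseudo_labeled_triples.append((query, corpus[ranking[0]], 1))
    pseudo_labeled_triples.append((query, corpus[ranking[-1]], 0))

for triple in pseudo_labeled_triples:
    print(triple)
```

The resulting triples would then feed a standard pointwise or pairwise training loop for a BERT-based ranker, with an optional few-shot fine-tuning stage on the modest set of annotated queries mentioned in the abstract.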