As the number of open and shared scientific datasets on the Internet grows under the open science movement, retrieving these datasets efficiently has become a crucial task in information retrieval (IR) research. In recent years, the development of large pre-trained models, particularly the pre-training and fine-tuning paradigm in which a large model is first pre-trained and then fine-tuned on downstream tasks, has provided new solutions for IR matching tasks. In this study, we use the original BERT tokens in the embedding layer, improve the Sentence-BERT model structure in the model layer by introducing the SimCSE and K-Nearest Neighbors methods, and use the CoSENT loss function in the optimization phase to optimize the target output. Comparative experiments and ablation studies show that our model outperforms competing models on both public and self-built datasets. This study explores and validates the feasibility and efficiency of pre-training techniques for the semantic retrieval of Chinese scientific datasets.
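To make the optimization and retrieval steps named above concrete, the following is a minimal sketch, assuming PyTorch, the commonly used CoSENT formulation with a scale factor of 20, and cosine-similarity K-Nearest Neighbors search over pre-computed sentence embeddings; the function names `cosent_loss` and `knn_retrieve` are illustrative and do not come from the paper's code.

```python
import torch
import torch.nn.functional as F

def cosent_loss(u, v, labels, scale=20.0):
    """CoSENT loss over a batch of sentence pairs (sketch).

    u, v:   (batch, dim) embeddings of the two sentences in each pair
    labels: (batch,) relevance labels; a higher value means a more similar pair
    scale:  temperature (lambda in the CoSENT formulation); 20 is a common default
    """
    # Scaled cosine similarity for each pair
    sims = scale * F.cosine_similarity(u, v, dim=-1)            # (batch,)
    # Pairwise differences s_j - s_i for every ordered pair (i, j)
    diff = sims[None, :] - sims[:, None]                        # (batch, batch)
    # Keep only pairs where pair i should rank above pair j (y_i > y_j)
    mask = (labels[:, None] > labels[None, :]).float()
    diff = diff - (1.0 - mask) * 1e12
    # log(1 + sum(exp(diff))) via logsumexp with an implicit zero term
    diff = torch.cat([torch.zeros(1, device=diff.device), diff.flatten()])
    return torch.logsumexp(diff, dim=0)

def knn_retrieve(query_emb, corpus_embs, k=10):
    """Return indices of the k most similar corpus entries by cosine similarity."""
    q = F.normalize(query_emb.reshape(-1), dim=-1)              # (dim,)
    c = F.normalize(corpus_embs, dim=-1)                        # (n, dim)
    scores = c @ q                                              # (n,)
    return torch.topk(scores, k=min(k, scores.numel())).indices
```

At inference time, query and dataset descriptions would be encoded by the fine-tuned bi-encoder, and `knn_retrieve` would rank candidate datasets by cosine similarity; the exact indexing strategy is an assumption here, not a detail stated in the abstract.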